Apache Ambari Meetup - June 18 6PM PDT @ San Jose Convention Center (San Jose, CA, USA)

2018-06-04 Thread Yusaku Sako
Hi all,

Mark your calendar for the upcoming Apache Ambari Meetup on June 18, 6PM PDT at 
San Jose Convention Center.

The details of the meetup can be found here:
https://www.meetup.com/Apache-Ambari-User-Group/events/250859836/

The event is free and open to everyone.

Hope to see you there!

Yusaku


Re: proper Ambari permissions

2017-07-11 Thread Yusaku Sako
Files View accesses HDFS as the current Ambari user that's logged in.
The output below (assuming that's coming from Files View) shows that you were
logged in as "admin" in Ambari and tried to delete files/dirs belonging to
"root" with 755 permissions.
This will not work, as the "admin" user does not have any superuser permission
in HDFS (from HDFS's perspective, it's just a regular user named "admin").

Here's a workaround that I can think of:
If you want to use Files View to manage all files/dirs in HDFS, you can create 
a user called "hdfs" in Ambari, and log into Ambari as that user.
Then you can operate on all HDFS files/dirs with Files View as the "hdfs" superuser.
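For example, here is a hedged sketch of creating that user through the Ambari
REST API (AMBARI_SERVER_HOST and the password are placeholders; adjust for
your setup):

curl -i -u admin:admin -H "X-Requested-By: ambari" -X POST -d '{"Users/user_name":"hdfs","Users/password":"CHANGEME","Users/active":true}' http://AMBARI_SERVER_HOST:8080/api/v1/users

After creating the user, log out of Ambari and log back in as "hdfs" before opening Files View.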

Yusaku


From: Sumit Mohanty
Reply-To: "user@ambari.apache.org"
Date: Tuesday, July 11, 2017 at 10:45 PM
To: "user@ambari.apache.org"
Subject: Re: proper Ambari permissions


Oh! These are files in HDFS.


You can delete them through the HDFS command line after logging in as the hdfs user.
Something like:


su hdfs

hdfs dfs -ls /user/
drwxrwx---   - ambari-qa hdfs  0 2017-07-12 03:37 /user/ambari-qa
hdfs dfs -rmr /user/ambari-qa


I am not familiar with the usage of Files View, but it looks like creating and
logging in as user "hdfs" should work.


From: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Sent: Tuesday, July 11, 2017 9:57 PM
To: user@ambari.apache.org
Subject: RE: proper Ambari permissions

Actually they aren’t even all files. I can’t blow away directories either. The 
files that I do have are the sample salaries data you can get from doing the 
file management tutorial from Hortonworks.
wget 
https://raw.githubusercontent.com/hortonworks/data-tutorials/893ba0221e2c76c91e9e2baa030323a42abcdf09/tutorials/hdp/hdp-2.5/manage-files-on-hdfs-via-cli-ambari-files-view/assets/sf-salary-datasets/sf-salaries-2011-2013.csv

Below is the error I get when I try to delete stuff from the GUI:

permission denied: user=admin, access=WRITE, 
inode="/user/hadoop/sf-salaries":root:hdfs:drwxr-xr-x

Permission denied: user=admin, access=WRITE, 
inode="/user/hadoop/sf-salaries-2011-2013/sf-salaries-2011-2013.csv":root:hdfs:drwxr-xr-x


Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData


From: Sumit Mohanty [mailto:smoha...@hortonworks.com]
Sent: Tuesday, July 11, 2017 11:04 PM
To: user@ambari.apache.org
Subject: Re: proper Ambari permissions


Can you provide an example of some of the files?



In general, Ambari runs as root unless configured with a custom user.  Some of
the files it manages may be created as the service user (e.g., HDFS data
directories are owned by the HDFS service user).



-Sumit


From: Adaryl Wakefield 
mailto:adaryl.wakefi...@hotmail.com>>
Sent: Tuesday, July 11, 2017 8:30 PM
To: user@ambari.apache.org
Subject: proper Ambari permissions

When I’m working in Ambari, sometimes I can’t manage files because whatever 
user I’m working under doesn’t have permission.

  1.  What account does Ambari use when it is interacting with the various 
other programs?
  2.  How do I need to set my permissions so that when I’m in Ambari as admin, 
I’m able to create and blow away things at will?

Adaryl "Bob" Wakefield, MBA
Principal
Mass Street Analytics, LLC
913.938.6685
www.massstreet.net
www.linkedin.com/in/bobwakefieldmba
Twitter: @BobLovesData



Re: Adding custom config properties programmatically

2017-06-09 Thread Yusaku Sako
Are you using the Blueprints API?
That's the recommended approach if you want to programmatically install 
clusters.
With that you can set any configuration properties you would like.

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints
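As a hedged illustration, such properties can be carried in the "configurations" section of a Blueprint; the blueprint name, host group, and stack version below are placeholders:

{
  "configurations": [
    {
      "core-site": {
        "hadoop.proxyuser.root.hosts": "*",
        "hadoop.proxyuser.root.groups": "*"
      }
    }
  ],
  "host_groups": [{"name": "master", "components": [{"name": "NAMENODE"}]}],
  "Blueprints": {"blueprint_name": "my-blueprint", "stack_name": "HDP", "stack_version": "2.6"}
}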

Yusaku

From: Satyanarayana Jampa
Reply-To: "user@ambari.apache.org"
Date: Friday, June 9, 2017 at 10:39 AM
To: "user@ambari.apache.org"
Subject: Adding custom config properties programmatically

Hi,
I would like to add “custom core-site” properties programmatically during installation.
Can you please help me out with the necessary info?

Say for ex:
Under HDFS TAB --> Custom core-site
hadoop.proxyuser.root.hosts=*
hadoop.proxyuser.root.groups=*

Thanks,
Satya.



Re: What's the state of Ambari dev/support?

2017-02-01 Thread Yusaku Sako
I think the lack of response to your specific questions is likely due to the
fact that the community has not focused on / does not actively maintain Windows support.

Yusaku

From: jeff saremi
Reply-To: "user@ambari.apache.org"
Date: Wednesday, February 1, 2017 at 8:19 AM
To: "user@ambari.apache.org"
Subject: What's the state of Ambari dev/support?


I posted 3 emails and logged 4 bugs. I only got one answer back.
Is this project dormant or being actively worked on?


Re: Question on Windows Installation for Server/Agent

2017-02-01 Thread Yusaku Sako
Windows support (both build time / run time) is something that a group of folks
worked on in the past (as you can see in the JIRA you referred to), but it is not
actively maintained.
Personally I don't know of anyone who is working on Windows at the moment, so 
you might be on your own here.  If others in the community are working on 
Windows
with success, please chime in.

Yusaku

From: jeff saremi
Reply-To: "user@ambari.apache.org"
Date: Monday, January 30, 2017 at 3:45 PM
To: "user@ambari.apache.org"
Subject: Question on Windows Installation for Server/Agent


I've been looking for any documentation or user posts on the possibility of
installing Ambari in some shape or form on Windows.
The only thing that showed up was this link:

https://issues.apache.org/jira/browse/AMBARI-7504

It suggests that there was a feature request like that at some point in the past
and that the feature got implemented.
Is this true? Is there any official/unofficial write-up on that?

thanks

Jeff


Re: Need pointers to build HA capability for a custom Stack in Ambari

2017-01-06 Thread Yusaku Sako
Hi Souvik,

Unfortunately, much of the configuration / orchestration logic for supporting HA
still lives in Ambari Web.
Ideally this needs to be declarative / pluggable so that it lives in the
stack/service definition, rather than being hardcoded into Ambari Web.

+dev list

Yusaku

From: Souvik Sarkhel
Reply-To: "user@ambari.apache.org"
Date: Friday, January 6, 2017 at 10:47 AM
To: "user@ambari.apache.org"
Subject: Re: Need pointers to build HA capability for a custom Stack in Ambari


Hi All,

I have developed a custom stack for Ambari which is working completely fine.
Now I need to add HA capability for a few components like Hadoop,
ResourceManager, Tomcat, etc. The problem is I can't use any of the existing
components of the HDP stack.
Thus I am looking for pointers on where I can start and how HA can be implemented.
I have found some information in the files located under
ambari/ambari-web/app/controllers/main/admin/highAvailability/, so can someone
please tell me whether I am looking in the correct direction, and if I am
correct, does the whole HA capability part lie in the ambari-web project and
not under the ambari-server project?

Thanks,
Souvik Sarkhel

On Dec 28, 2016 10:10 PM, "Souvik Sarkhel" <souvik.sark...@gmail.com> wrote:
Hi All,

I have developed a custom stack for Ambari which is working completely fine.
Now I need to add HA capability for a few components like Hadoop,
ResourceManager, Tomcat, etc. The problem is I can't use any of the existing
components of the HDP stack.
Thus I am looking for pointers on where I can start and how HA can be implemented.
I have found some information in the files located under
ambari/ambari-web/app/controllers/main/admin/highAvailability/, so can someone
please tell me whether I am looking in the correct direction, and if I am
correct, does the whole HA capability part lie in the ambari-web project and
not under the ambari-server project?

--
Souvik Sarkhel


Re: AsyncCallableService constantly failing

2017-01-06 Thread Yusaku Sako
Hi Cyril,

In the Ambari Server log, I'm not seeing anything regarding Ambari Agents 
registering themselves with the server.
Have you installed, configured, and started Ambari Agents on all the hosts, 
including the Ambari Server host?

Yusaku

From: Cyril Scetbon
Reply-To: "user@ambari.apache.org"
Date: Thursday, January 5, 2017 at 9:52 PM
To: "user@ambari.apache.org"
Subject: AsyncCallableService constantly failing

Hey guys,

I'm trying to install HDP components using Ambari 2.2.1, but it fails. Can
someone tell me what's going on, what to fix, and where to look? It's really
hard to know what's going on :(

Here are the logs and the JSON files I used:

ambari server logs http://pastebin.com/P74HcVP7
hosts-map.json http://pastebin.com/dHyZujT1
multi-nodes-hdp.json http://pastebin.com/B4j1CSjZ

Nothing on Ambari agents.

Thank you


Apache Ambari Birds of a Feather (BoF) session, 6/30 Thurs, 5-7PM

2016-06-29 Thread Yusaku Sako
Hi all,

There will be an Apache Ambari BoF session tomorrow (6/30 Thurs) at 5-7PM PDT.
It will be at San Jose Convention Center in Ballroom B.
This is a chance for us in the community to get together and have an open 
discussion regarding future direction, process improvements, and anything 
related to the Apache Ambari project.
I hope you can join us!

Note that this is a free event and everyone is welcome.  You do *not* need to 
have a Hadoop Summit event pass to participate in this session.

http://www.meetup.com/Apache-Ambari-User-Group/events/232255401/

Yusaku


Re: Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose Convention Center

2016-06-27 Thread Yusaku Sako
Hi all,

Here's a reminder that the Apache Ambari Meetup is happening tonight!
Also, a WebEx session has been scheduled for those who wish to join remotely.

http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/

We'll be in Room LL20C at San Jose Convention Center.
The Meetup starts at 6PM sharp.

See you there!
Yusaku

From: Yusaku Sako
Date: Tuesday, June 21, 2016 at 3:37 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>", 
"d...@ambari.apache.org<mailto:d...@ambari.apache.org>"
Subject: Re: Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose 
Convention Center

Hi Ambari users and developers,

Here is a reminder that we'll be having an Ambari Meetup at San Jose Convention 
Center this coming Monday (June 27, 6PM-8PM PDT).

For the agenda / location / RSVP, please visit: 
http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/

BTW, thanks to those who volunteered to give a talk at this event!

Hope to see you there!
Yusaku

From: Yusaku Sako
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>"
Date: Tuesday, June 7, 2016 at 5:55 PM
To: "d...@ambari.apache.org<mailto:d...@ambari.apache.org>", 
"user@ambari.apache.org<mailto:user@ambari.apache.org>"
Subject: Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose Convention 
Center

Hi everyone,

There will be an Apache Ambari Meetup on June 27 (Mon), 6PM-8PM PDT, at San 
Jose Convention Center [1].
Note that there's room for only 40-50 people or so.  Please sign up in advance 
to secure your spot!

This is a free event - you do not need a pass for Hadoop Summit, even though 
it's at the same venue.

We have 2 hours, so we can have a number of speakers based on how many are 
interested in presenting.
If you would like to present at this Meetup, please let me know directly or 
thru the dev mailing list.

Thanks, and hope to see many of you there!
Yusaku

[1] http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/


Re: Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose Convention Center

2016-06-21 Thread Yusaku Sako
Hi Ambari users and developers,

Here is a reminder that we'll be having an Ambari Meetup at San Jose Convention 
Center this coming Monday (June 27, 6PM-8PM PDT).

For the agenda / location / RSVP, please visit: 
http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/

BTW, thanks to those who volunteered to give a talk at this event!

Hope to see you there!
Yusaku

From: Yusaku Sako
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>"
Date: Tuesday, June 7, 2016 at 5:55 PM
To: "d...@ambari.apache.org<mailto:d...@ambari.apache.org>", 
"user@ambari.apache.org<mailto:user@ambari.apache.org>"
Subject: Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose Convention 
Center

Hi everyone,

There will be an Apache Ambari Meetup on June 27 (Mon), 6PM-8PM PDT, at San 
Jose Convention Center [1].
Note that there's room for only 40-50 people or so.  Please sign up in advance 
to secure your spot!

This is a free event - you do not need a pass for Hadoop Summit, even though 
it's at the same venue.

We have 2 hours, so we can have a number of speakers based on how many are 
interested in presenting.
If you would like to present at this Meetup, please let me know directly or 
thru the dev mailing list.

Thanks, and hope to see many of you there!
Yusaku

[1] http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/


Apache Ambari Meetup on June 27 (6PM-8PM PDT) at San Jose Convention Center

2016-06-07 Thread Yusaku Sako
Hi everyone,

There will be an Apache Ambari Meetup on June 27 (Mon), 6PM-8PM PDT, at San 
Jose Convention Center [1].
Note that there's room for only 40-50 people or so.  Please sign up in advance 
to secure your spot!

This is a free event - you do not need a pass for Hadoop Summit, even though 
it's at the same venue.

We have 2 hours, so we can have a number of speakers based on how many are 
interested in presenting.
If you would like to present at this Meetup, please let me know directly or 
thru the dev mailing list.

Thanks, and hope to see many of you there!
Yusaku

[1] http://www.meetup.com/Apache-Ambari-User-Group/events/231576067/


[CVE-2016-0731] Apache Ambari: Ambari File Browser View security vulnerability

2016-05-16 Thread Yusaku Sako
CVE-2016-0731: Ambari File Browser View security vulnerability

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: 1.7.0 to 2.2.0

Versions Fixed: 2.2.1

Description: Ambari File Browser View, depending on how it is configured, 
allows an Ambari admin user to gain access to Ambari Server's local file system.

Mitigation: Ambari users should upgrade to version 2.2.1 or above.

Reference: 
https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Vulnerabilities



Re: Hosting package for installation

2016-02-12 Thread Yusaku Sako
Hi Priyanka,

Questions about how a project is accepted/released as part of HDP, etc., are
vendor-specific and best handled outside of this mailing group.
I'll follow up with you personally.

Yusaku

From: priyanka gugale
Reply-To: "user@ambari.apache.org"
Date: Thursday, February 11, 2016 at 10:47 PM
To: "user@ambari.apache.org"
Subject: Re: Hosting package for installation

Sorry for the bombardment of questions, but I have one more.
When is a project accepted to be a part of an HDP release, and what is the
process to follow to become part of an HDP release?

-Priyanka

On Fri, Feb 12, 2016 at 1:26 AM, Jayush Luniya <jlun...@hortonworks.com> wrote:
Pig, Hive, Spark, and Storm packages are already part of the HDP release, so
no, they are not custom services so to speak.

Thanks
Jayush

From: priyanka gugale <priyanka.gug...@gmail.com>
Reply-To: "user@ambari.apache.org"
Date: Wednesday, February 10, 2016 at 9:37 PM

To: "user@ambari.apache.org"
Subject: Re: Hosting package for installation

Hi Jayush,

Thanks for your reply. Are most Apache projects like Pig, Hive, Spark, Storm,
etc. following the same guideline, i.e., having their private repository and
configuring it after Ambari is installed, as the blog suggests?

-Priyanka

On Thu, Feb 11, 2016 at 6:20 AM, Jayush Luniya <jlun...@hortonworks.com> wrote:
You can create your own repository from the HDP repository and add your
artifacts to your private repository. You can then add your custom service to
the stack definition.

Here are a few resources that would help.

Building local repository
http://hortonworks.com/blog/how-to-use-local-repositories-apache-ambari/

Adding custom service to a stack definition
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
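As a hedged sketch of the local-repository piece for yum-based systems (the repo id, name, and baseurl are placeholders), a .repo file dropped into /etc/yum.repos.d/ on each host might look like:

[my-custom-service]
name=My Custom Service Repository
baseurl=http://repo.example.com/my-service/centos6/
enabled=1
gpgcheck=0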

Thanks
Jayush

From: priyanka gugale <pri...@apache.org>
Reply-To: "user@ambari.apache.org"
Date: Tuesday, February 9, 2016 at 10:28 PM
To: "user@ambari.apache.org"
Subject: Re: Hosting package for installation

Basically I would like to understand:

1. Is there a way that we can push our package to the HDP Stack repository?
2. If not, how do we add a third-party repository to the service, so that the
package is downloaded from the right place during service installation?
3. In case we don't have any package, can we write scripts to build the package
and then use it for installation, or follow some custom install process?

If more than one option is feasible, which of these is the recommended way to
follow?

-Priyanka

On Tue, Feb 9, 2016 at 6:57 PM, priyanka gugale <pri...@apache.org> wrote:
Hi,

We are planning to create an Apache Apex package in Ambari. What should be the
source of the installer?

1. Write a script to check out the latest code, build the rpm/deb package, and
install it when someone tries to install the service?
2. Host the rpm/deb on some repository and write the installer script to just
install the hosted packages?
   If hosting the packages is the right option, where can we host it? Can we
push it to the Ambari public repository? Or do we need to host it somewhere
else and expect users to use our repository?

-Priyanka





[ANNOUNCE] Di Li as a new committer for Ambari

2015-12-15 Thread Yusaku Sako
Hi all,

It is my pleasure to announce that Di Li has become a committer for Ambari.

Di has been providing meaningful contributions in areas that tend to get 
overlooked
(usability, getting the details right, pluggability, etc.) in a full-stack 
manner to improve
various aspects of Ambari, including Ambari Agent, Ambari Web, and Ambari 
Server.

Congratulations, Di!

Yusaku


Re: HA without Kerberos?

2015-10-22 Thread Yusaku Sako
Stephen,

Ambari lets you set up NameNode HA without Kerberos.
What are the JournalNodes complaining about?  Can you attach the logs of the
errors you are seeing, as well as your core-site and hdfs-site?

Yusaku

From: Stephen Boesch
Reply-To: "user@ambari.apache.org"
Date: Thursday, October 22, 2015 at 12:42 PM
To: "user@ambari.apache.org"
Subject: HA without Kerberos?


I was setting up HA NN w/o Kerberos and got pretty far, but the JournalNodes
are complaining. It seems they require Kerberos: there is no setting to use
"simple" authentication and they do not respect the hdfs-site.xml setting.

Is there any solution to this?

Otherwise: why has fault tolerance been conflated with security?  We can
certainly imagine a cluster requiring fault tolerance that has been air-gapped
- and thus Kerberos would not be required.

thanks

stephenb



[CVE-2015-5210] Unvalidated Redirects and Forwards using targetURI parameter can enable phishing exploits

2015-10-12 Thread Yusaku Sako
CVE-2015-5210: Unvalidated Redirects and Forwards using targetURI parameter can 
enable phishing exploits

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: 1.7.0 to 2.1.1

Versions Fixed: 2.1.2

Description: A redirect to an untrusted server is possible via unvalidated 
input that specifies a redirect URL upon successful login.

Mitigation: Ambari users should upgrade to version 2.1.2 or above. From version
2.1.2 onwards, redirect locations must be relative URLs.

References: 
https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Vulnerabilities


[CVE-2015-3270] A non-administrative user can escalate themselves to have administrative privileges remotely

2015-10-12 Thread Yusaku Sako
CVE-2015-3270: A non-administrative user can escalate themselves to have 
administrative privileges remotely

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: 1.7.0, 2.0.0, 2.0.1, 2.1.0

Versions Fixed: 2.0.2, 2.1.1

Description: An authenticated user can remotely escalate his/her permissions to
the administrative level. This can escalate their privileges for access through
the API as well as from the UI.

Mitigation: Ambari users should upgrade to version 2.1.1 or above (2.0.0 and 
2.0.1 can be upgraded to 2.0.2).

In fixed versions of Ambari (2.0.2, 2.1.1, and onward), access to the user
resource endpoint is protected such that only a user with administrator
privileges can escalate a user's privileges. A user, however, may still access
the endpoint but may only change their own password.

Credit: This issue was discovered by security analysts at Blue Cross Blue 
Shield Association


[CVE-2015-1775] Apache Ambari Server Side Request Forgery vulnerability

2015-10-12 Thread Yusaku Sako
CVE-2015-1775: Apache Ambari Server Side Request Forgery vulnerability

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: 1.5.0 to 2.0.2

Versions Fixed: 2.1.0

Description: Ambari exposes a proxy endpoint through “api/v1/proxy” that can be
used to make REST calls to arbitrary host:port combinations that are accessible
from the Ambari server host. The ability to make these calls is limited to
Ambari-authenticated users only. In addition, a user needs to be an Ambari
admin user to make the REST calls using METHODs other than GET (non-admin users
can only call GET). This ability allows malicious users to perform port scans
and/or access unsecured services visible to the Ambari Server host through the
proxy endpoint. In addition, Ambari provides a utility to handle such proxy
calls that are used by View instances hosted by Ambari.

Mitigation: Ambari users should upgrade to version 2.1.0 or above. From version
2.1.0 onwards, the proxy endpoint (api/v1/proxy) has been disabled. In addition,
a configurable parameter (proxy.allowed.hostports) was introduced in the config
file ambari.properties to explicitly specify a list of host/port combinations
that can be proxied to when using the utility.
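As a hedged illustration, the ambari.properties entry described above might look like the following (the hosts and ports are placeholders):

proxy.allowed.hostports=host1.example.com:8088,host2.example.com:50070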

Credit: This issue was discovered by Mateusz Olejarka (SecuRing).

References: 
https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Vulnerabilities


Re: [CVE-2015-3186] Apache Ambari XSS vulnerability

2015-10-12 Thread Yusaku Sako
Adding the correct user@ambari.apache.org list.

Yusaku

From: Yusaku Sako
Date: Monday, October 12, 2015 at 6:34 PM
To: Mark Kerzner, Yosef Kerzner, "us...@ambari.apache.org",
"d...@ambari.apache.org", "secur...@apache.org",
"oss-secur...@lists.openwall.com", "bugt...@securityfocus.com"
Subject: [CVE-2015-3186] Apache Ambari XSS vulnerability


CVE-2015-3186: Apache Ambari XSS vulnerability

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: 1.7.0 to 2.0.2

Versions Fixed: 2.1.0

Description: Ambari allows authenticated cluster operator users to specify 
arbitrary text as a note when saving configuration changes. This note field is 
rendered as-is (unescaped HTML). This exposes opportunities for XSS.

Mitigation: Ambari users should upgrade to version 2.1.0 or above.

Version 2.1.0 onwards properly HTML-escapes the note field associated with 
configuration changes.

Credit: Hacker Y on the Elephant Scale team.

References: 
https://cwiki.apache.org/confluence/display/AMBARI/Ambari+Vulnerabilities



Re: Ambari memory leak?

2015-10-02 Thread Yusaku Sako
Andrew,

Do you want to go ahead and file a bug in JIRA?


I’m just speculating, but this might be related?
https://issues.apache.org/jira/browse/AMBARI-11349

We saw logs growing geometrically, and I think the change was to simply 
suppress log messages by changing 
the log level.  So I’m wondering if we have some memory leakage because of 
that.  I could be totally wrong.

Yusaku



On 9/29/15, 12:34 PM, "Andrew Robertson"  wrote:

>In Ambari 2.1.1, the ambari-agent on two of my hosts occasionally
>quietly dies without any messages going to the log file or to stdout.
>I've also noticed that the memory usage in ambari_agent seems to creep
>up over time, and I suspect the crashes are related to this.  Here's
>the snapshot from ps aux a few hours before the ambari agent process
>died quietly:
>
>$ ps aux | grep ambari_agent
>root  3759 25.8 36.2 27152176 23872968 ?   Sl   Sep15 4708:55
>/usr/bin/python2.6
>/usr/lib/python2.6/site-packages/ambari_agent/main.py start
>
>(ambari_agent was at 25% cpu usage, 27GB of memory).
>
>This happens to be only affecting 2 hosts that I have; each have a
>number of master services (mostly Namenode, ResourceManager,
>HiveServer2). On my other machine with the same set of master
>services, ambari_agent was restarted a few days ago and is already up
>to 8gb of memory. On my machines without the master services - just
>datanodes / nodemanagers / etc - ambari is using ~1.7gb of memory
>(VSZ) and has been stable since I last upgraded Ambari in late August.
>
>I don't recall if this was happening in 2.1.0, or if it started in
>2.1.1. I didn't have 2.1.0 deployed for very long.  It wasn't
>happening in 2.0 - though I've also deployed Kerberos since then.
>
>Is this a known issue or has anyone else seen this?
>


Re: Ambari 2.1.0 on HDP 2.3 problem deploying HDFS

2015-07-31 Thread Yusaku Sako
It seems like this is happening because the default directory recommendations
given contain /home as a mount point.
What is the output of:

GET /api/v1/clusters//hosts?fields=Hosts/disk_info/*
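For reference, a hedged curl form of that call (AMBARI_SERVER_HOST and CLUSTER_NAME are placeholders for your server and cluster name):

curl -u admin:admin 'http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts?fields=Hosts/disk_info/*'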

Yusaku

From: Ian Soboroff
Reply-To: "user@ambari.apache.org"
Date: Friday, July 31, 2015 at 1:21 PM
To: "user@ambari.apache.org"
Subject: Re: Ambari 2.1.0 on HDP 2.3 problem deploying HDFS

I got frustrated enough that I cleared everything down to a naked Postgres 
install, and put everything back step-by-step.  I still get this error.  I 
repeat, I haven't given any values to the NameNode directories config, because 
I don't get an opportunity to do.

Reloading the page or going Back then Next do not solve the problem for me.

Ian

On Jul 31, 2015, at 2:44 PM, Srimanth Gunturi <sgunt...@hortonworks.com> wrote:


Hi Ian,

I was able to reproduce the issue by jumping between other service configs
*after* setting the value to "/home/foo".

I have opened https://issues.apache.org/jira/browse/AMBARI-12612 to fix this
issue.

The workaround is to either hit Back followed by Next, or just refresh the page
to get back the original config value.

Regards,

Srimanth




From: Srimanth Gunturi <sgunt...@hortonworks.com>
Sent: Friday, July 31, 2015 11:10 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.1.0 on HDP 2.3 problem deploying HDFS


Hi Ian,

Which version of Ambari are you using to install HDP-2.3?

Can you provide the output of the 'ambari-server --hash' command? Also, do you
see any Javascript exceptions?


I have just attempted to reproduce the issue by adding HDFS service, but do not 
see the issue (screenshot attached).

If you are not entering any /home value, and Ambari is not providing it by 
default, I am trying to figure out where it is coming from.






What output do you see when you go to following URL in browser?

http://[ambari-server]:8080/api/v1/clusters/[cluster-name]/configurations?type=hdfs-site

- Replace [ambari-server] with your Ambari server hostname

- Replace [cluster-name] with your cluster ID

- Make sure you are logged in

Regards,

Srimanth




From: Ian Soboroff <ian.sobor...@nist.gov>
Sent: Friday, July 31, 2015 10:48 AM
To: user@ambari.apache.org
Subject: Re: Ambari 2.1.0 on HDP 2.3 problem deploying HDFS

I haven't set the value of 'NameNode directories' to anything.  The interface 
doesn't let me.
Ian

On Jul 31, 2015, at 1:35 PM, Srimanth Gunturi <sgunt...@hortonworks.com> wrote:


Hi Ian,

Did you set the value of any of the 'NameNode directories' to begin with /home 
or /homes?

We show that error when it does - 
https://github.com/apache/ambari/blob/trunk/ambari-web/app/utils/validator.js#L96
 (screenshot attached)


What is strange in your case is that it does not show the text area so that you
can correct it.

Can you detail the exact actions that resulted in this outcome?

Alternatively, do you see any Javascript errors in the Javascript console, and 
have you tried it in other browsers?

Regards,

Srimanth






From: Ian Soboroff <ian.sobor...@nist.gov>
Sent: Friday, July 31, 2015 8:38 AM
To: user@ambari.apache.org
Subject: Ambari 2.1.0 on HDP 2.3 problem deploying HDFS

I'm installing HDP 2.3 on a new cluster, and taking it slow.  I'm using Ambari 
to deploy, and so far have only installed Zookeeper and Ambari-metrics.

When I go to install HDFS, I get to service customization, and it won't let me 
set a NameNode directory.  I've attached a screenshot.  When I hover the mouse 
over the 'NameNode directories' alert, the mouseover text says "Can't start 
with home(s)".  At this point there is no way to proceed with the installation.

Help!

Ian






Re: Enabling namenode HA fails when non default ports are used

2015-07-08 Thread Yusaku Sako
Hi Juanjo,

This is a bug that should be fixed.  Thanks for pointing this out.
Would you like to go ahead and file a JIRA for this?

Thanks,
Yusaku

From: Juanjo Marron <juanjo.mar...@yahoo.com>
Reply-To: Ambari User <user@ambari.apache.org>, Juanjo Marron <juanjo.mar...@yahoo.com>
Date: Thursday, July 2, 2015 11:56 AM
To: Ambari User <user@ambari.apache.org>
Subject: Enabling namenode HA fails when non default ports are used

Hi all,

I have a question related to High Availability configuration when changing
default ports.

I checked the source code for the Step 3 in namenode HA wizard 
(ambari-web/app/controllers/main/admin/highAvailability/nameNode/step3_controller.js)

The node address is looked up from the configuration, but the port is hardcoded, e.g.:
...
this.setConfigInitialValue(config,zooKeeperHosts[0] + ':2181,' + 
zooKeeperHosts[1] + ':2181,' + zooKeeperHosts[2] + ':2181');
...

Is there any reason for not looking up the actual port number instead of using the default value?

Thanks for your answer,

Regards,

Juanjo Marron




Re: setenforce disabled while starting services

2015-07-06 Thread Yusaku Sako
Hi,

What version of Ambari are you using?
I think you are hitting https://issues.apache.org/jira/browse/AMBARI-10841
This was fixed in 2.0.1.

Yusaku


From: Satyanarayana Jampa <sja...@innominds.com>
Reply-To: "user@ambari.apache.org"
Date: Monday, July 6, 2015 12:01 AM
To: "user@ambari.apache.org"
Subject: setenforce disabled while starting services

Hi,
I see the below error while trying to start the Ambari services from Ambari 2.0:

Fail: Execution of 'setenforce 0' returned 1. setenforce: SELinux is disabled
Thanks,
Satya.



Re: ambari trunk / cluster wizard / javascript error

2015-07-01 Thread Yusaku Sako
Hi Donald,

Are you still having this issue on the latest trunk available now?

Yusaku

From: Donald Hansen <don...@hansenfamily.us>
Reply-To: "user@ambari.apache.org"
Date: Tuesday, June 30, 2015 10:42 AM
To: "user@ambari.apache.org"
Subject: ambari trunk / cluster wizard / javascript error

Not sure if this is a new bug or if I just did something wrong. I was setting
up a new virtual cluster on my local machine and used the trunk repo for my
install. While going through the Cluster Wizard I got to the Customize Services
step, then got a javascript error and a spinner that will not stop spinning.

Uncaught TypeError: Cannot read property 'hosts' of undefined
app.js:154312

Has anyone else got this?


Re: Ambari data corruption/recovery process

2015-06-26 Thread Yusaku Sako
Yes, if you are talking about corruption, then you would need snapshots to go 
back to.
Recovery would be simpler if the Ambari Server hostname does not change (IP 
address changes should not matter).

One more step that I forgot to mention...  you would need to delete 
/var/lib/ambari-agent/keys/* from each agent before restarting it.

Yusaku

From: Clark Breyman <cl...@breyman.com>
Reply-To: "user@ambari.apache.org"
Date: Friday, June 26, 2015 5:22 PM
To: "user@ambari.apache.org"
Subject: Re: Ambari data corruption/recovery process

Thanks Yusaku for the quick response.

For our production systems, we're planning on using Postgres replication to 
ensure backups, though that doesn't defend against data corruption. Perhaps 
snapshots will be required.
Is there any documentation on restoring to a newly provisioned host? Is there
any reason to use a DNS A record instead of a CNAME alias to simplify the
recovery process?


On Fri, Jun 26, 2015 at 5:14 PM, Yusaku Sako <yus...@hortonworks.com> wrote:
Ambari DB should be backed up on a regular basis.  This is the most important 
piece of information.
It is also advisable to back up
/etc/ambari-server/conf/ambari-server.properties.
If you have these two, you can restore Ambari Server back to a running 
condition on a different host.
If the hostname of the Ambari Server changes, then you would have to update 
/etc/ambari-agent/conf/ambari-agent.ini to point to the new Ambari Server 
hostname and restart the agent.

Yusaku

From: Clark Breyman <cl...@breyman.com>
Reply-To: "user@ambari.apache.org"
Date: Friday, June 26, 2015 5:10 PM
To: "user@ambari.apache.org"
Subject: Ambari data corruption/recovery process

I'm wondering if anyone can share pointers/procedures/best practices to handle 
the scenarios where:

a) The sql database becomes corrupt. (Bugs, ...)
b) The Ambari service host is lost (e.g. EC2 instance termination, physical 
hardware loss, ...)




Re: Ambari data corruption/recovery process

2015-06-26 Thread Yusaku Sako
Ambari DB should be backed up on a regular basis.  This is the most important 
piece of information.
It is also advisable to back up
/etc/ambari-server/conf/ambari-server.properties.
If you have these two, you can restore Ambari Server back to a running 
condition on a different host.
If the hostname of the Ambari Server changes, then you would have to update 
/etc/ambari-agent/conf/ambari-agent.ini to point to the new Ambari Server 
hostname and restart the agent.
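As a hedged sketch of such a backup, assuming the default embedded PostgreSQL setup where both the database and the user are named "ambari" (adjust for your own database):

# stopping the server first keeps the dump consistent
ambari-server stop
pg_dump -U ambari ambari > /backup/ambari-db-$(date +%F).sql
# keep a copy of the server properties file mentioned above
cp /etc/ambari-server/conf/ambari-server.properties /backup/
ambari-server start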

Yusaku

From: Clark Breyman <cl...@breyman.com>
Reply-To: "user@ambari.apache.org"
Date: Friday, June 26, 2015 5:10 PM
To: "user@ambari.apache.org"
Subject: Ambari data corruption/recovery process

I'm wondering if anyone can share pointers/procedures/best practices to handle 
the scenarios where:

a) The sql database becomes corrupt. (Bugs, ...)
b) The Ambari service host is lost (e.g. EC2 instance termination, physical 
hardware loss, ...)



Re: aborted vagrant

2015-06-23 Thread Yusaku Sako
Perhaps iptables is turned on?

Try "service iptables stop"

Yusaku

From: Donald Hansen <don...@hansenfamily.us>
Reply-To: "user@ambari.apache.org"
Date: Tuesday, June 23, 2015 9:38 AM
To: "user@ambari.apache.org"
Subject: aborted vagrant

I've been developing with Ambari locally using Vagrant and I usually suspend my 
images before closing up my laptop. I forgot last night and my machines went to 
the aborted state. I got them resumed, but now when going to the UI
(http://c6401.ambari.apache.org:8080/) I get ERR_CONNECTION_REFUSED.

If I SSH into the box and run curl on that URL, it works fine.

Anyone have any ideas what needs done to get this back? I can blow them away 
and re set everything up again if I need to but was hoping to avoid that if 
possible.

Thanks.
Donald


Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

2015-06-16 Thread Yusaku Sako
Pratik,

You can change the form data to something like this:

{"RequestInfo":{"context":"Start 
MyComponent"},"Body":{"HostRoles":{"state":"STARTED"}}}

The Background Ops dialog in the UI displays the "RequestInfo/context" value 
that was passed when the API call is made.
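Putting it together with your original call, a hedged sketch (the placeholders are the same as in your command):

curl --user admin:admin -i -X PUT -d '{"RequestInfo":{"context":"Start MyComponent"},"Body":{"HostRoles":{"state":"STARTED"}}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_HOST_ADDED/host_components/COMPONENT_NAME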
I hope this helps.

Yusaku



From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org"
Date: Tuesday, June 16, 2015 12:01 PM
To: "user@ambari.apache.org"
Subject: FW: Adding Hosts to Existing Cluster | Ambari 1.7.0


Hello All,

I was able to add a host to the existing Hortonworks cluster by following the
steps mentioned in the link
https://cwiki.apache.org/confluence/display/AMBARI/Add+a+host+and+deploy+components+using+APIs
However, I need some pointers to resolve one of my problems, which is mentioned
below.

When I try to execute the following command on an already existing hadoop cluster:

curl --user admin:admin -i -X PUT -d '{"HostRoles": {"state": "STARTED"}}' http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/NEW_HOST_ADDED/host_components/<SERVICE_NAME>

On the Ambari UI, at the top left corner there is a notification saying that 1
operation is in progress, and when I click on it, it displays “Request name not
specified” as the message body; however, the deployment progress bar is working.

Can we change this to something like a message body which we pass while
executing the above statement itself, so that we can get that message when we
click on "1 ops" in the Ambari UI (instead of seeing “Request name not specified”)?

Thanks,
Pratik
From: Jeff Sposetti [mailto:j...@hortonworks.com]
Sent: Sunday, May 17, 2015 9:17 PM
To: user@ambari.apache.org
Subject: Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

Hi,

Posted a comment to https://issues.apache.org/jira/browse/AMBARI-8458 that 
includes a brief (simple) example.

Note: this is for Ambari 2.0.0. Apologies if I didn’t highlight that earlier. 
If you are sticking with Ambari 1.7 and not upgrading to 2.0, then you will have to
use the other API methods described. But once you get to Ambari 2.0, this API 
becomes an option.

Hope this helps.

Jeff

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org"
Date: Sunday, May 17, 2015 at 10:57 AM
To: "user@ambari.apache.org"
Subject: RE: Adding Hosts to Existing Cluster | Ambari 1.7.0


I think I will stick to the approach mentioned in
https://issues.apache.org/jira/browse/AMBARI-8458. This approach seems to be
pretty easy to use.


Can someone help me out on the same ?

Please see the below mail conversions with Jeff for detail.

Help much appreciated !!

~Pratik


From: Yusaku Sako [mailto:yus...@hortonworks.com]
Sent: Sunday, May 17, 2015 6:32 PM
To: user@ambari.apache.org
Subject: Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

I think others can help you with the blueprint-style add host call.
In the meantime, you should also look at 
https://cwiki.apache.org/confluence/display/AMBARI/Bulk+install+components+on+selected+hosts

Thanks,
Yusaku

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org"
Date: Sunday, May 17, 2015 4:22 AM
To: "user@ambari.apache.org"
Subject: RE: Adding Hosts to Existing Cluster | Ambari 1.7.0

Jeff,

I had a look at the link which you had provided; however, I am not sure why it
didn't work for me.

Below is the command which I tried,
Command:
curl --user admin:admin -H "X-Requested-By: ambari" -i -X POST -d 
'{"blueprint_name": "mymasterblueprint", "host_group": "compute"}' 
https://XX.XX.XX.XX:8443/api/v1/clusters/CLUSTER/hosts/vmkdev0027.persistent.com

Response:
HTTP/1.1 400 Bad Request
Set-Cookie: AMBARISESSIONID=15w0nek4yww411pi8iqy70c8u5;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 160
Server: Jetty(7.6.7.v20120910)

{
  "status" : 400,
  "message" : "The properties [blueprint, host_group] specified in the request 
or predicate are not supported for the resource type Host."
}

Please let me know if I have mi

Re: ambari user management

2015-06-12 Thread Yusaku Sako
Ah, you are right, John.

Please try the following call:

curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d
'{"Users/password":"mysecret","Users/old_password":"admin"}'
http://localhost:8080/api/v1/users/


I hope this helps.

Yusaku

From: John Lane <john.lane...@googlemail.com>
Reply-To: "user@ambari.apache.org"
Date: Tuesday, June 9, 2015 5:26 AM
To: "user@ambari.apache.org"
Subject: ambari user management


Hi,

It seems that the procedure described below from the user mailing list (March
2014) no longer works (with Ambari 1.7); is there a recommended alternative?

***

No, admins cannot change user passwords via configs.sh; configs.sh is a
wrapper that uses the API to manage "configuration" objects that do not
deal with user passwords.
However, admins can change passwords directly via the API (or with a
similar wrapper script).
Here's an example:

curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d
'{"Users":{"roles":"admin,user","password":"mysecret","old_password":"admin"}}'
http://localhost:8080/api/v1/users/

where:
* "roles" is a comma-delimited list of roles that the user should belong to
"admin,user" for admin users; just "user" for non-admin users.
* "password" is the new password to set for the user
* "old_password" is misleading, but* it's the password of the admin user
invoking this call*.  If you omit this parameter, the API call seems to go
thru, but the password does not actually change.  This is a bit redundant
and confusing, but that's how it works today...

I hope this helps!

Yusaku




Regards


Re: Ambari meetup on June 8 (Mon), 7:30-8:30pm at San Jose Convention Center

2015-06-08 Thread Yusaku Sako
Hi everyone,

This is a reminder that there will be an Ambari meetup tonight!
Hope to see you there.

Yusaku

From: Yusaku Sako <yus...@hortonworks.com>
Date: Thursday, June 4, 2015 8:49 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, 
"d...@ambari.apache.org<mailto:d...@ambari.apache.org>" 
mailto:d...@ambari.apache.org>>
Subject: Ambari meetup on June 8 (Mon), 7:30-8:30pm at San Jose Convention 
Center

Hi all,

There will be an Ambari meetup on June 8, 7:30pm-8:30pm at San Jose Convention 
Center:
http://www.meetup.com/HSquared-Hadoop-Hortonworks-User-Group/events/222468662/

Everyone is welcome/encouraged to attend.  It is free; you do not have to have 
a pass for the Hadoop Summit event to attend this.
If you are planning on attending, please RSVP from the Meetup site above.

Thanks and see you there!
Yusaku


Ambari meetup on June 8 (Mon), 7:30-8:30pm at San Jose Convention Center

2015-06-04 Thread Yusaku Sako
Hi all,

There will be an Ambari meetup on June 8, 7:30pm-8:30pm at San Jose Convention 
Center:
http://www.meetup.com/HSquared-Hadoop-Hortonworks-User-Group/events/222468662/

Everyone is welcome/encouraged to attend.  It is free; you do not have to have 
a pass for the Hadoop Summit event to attend this.
If you are planning on attending, please RSVP from the Meetup site above.

Thanks and see you there!
Yusaku


Re: custom services / status

2015-06-03 Thread Yusaku Sako
Have you implemented the "status" command for the component(s) in your custom 
service?  
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
For most components, the status is based on the PID file.
You can look at some examples in the common-services directory: 
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/common-services/AMBARI_METRICS/0.1.0/package/scripts/hbase_regionserver.py#L55-L59

Yusaku

From: Donald Hansen <don...@hansenfamily.us>
Reply-To: "user@ambari.apache.org"
Date: Wednesday, June 3, 2015 3:39 PM
To: "user@ambari.apache.org"
Subject: custom services / status

I'm trying to create a custom service in Ambari and am curious how it tells
Ambari whether the service successfully started or not. I was able to add a
python function
that starts my service and my service does start correctly but Ambari still 
shows the service as not started.

Thanks.
Donald


Re: teragen terasort fails for more than 5GB of data

2015-06-03 Thread Yusaku Sako
Hi Pratik,

Looks like you are running out of disk space.

15/05/27 06:24:46 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_41_0, Status : FAILED
FSError: java.io.IOException: No space left on device
15/05/27 06:24:48 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_46_0, Status : FAILED
FSError: java.io.IOException: No space left on device

Yusaku

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org"
Date: Wednesday, June 3, 2015 4:21 AM
To: "user@ambari.apache.org"
Subject: teragen terasort fails for more than 5GB of data

Hi All,

When I run the teragen-terasort test on my deployed Hadoop cluster, I get the
following error:

15/05/27 06:24:36 INFO mapreduce.Job: map 57% reduce 18%
15/05/27 06:24:39 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_r_00_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in 
shuffle in InMemoryMerger - Thread to merge in-memory shuffled map-outputs
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not 
find any valid local directory for 
output/attempt_1432720271082_0005_r_00_0/map_38.out
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at 
org.apache.hadoop.mapred.YarnOutputFiles.getInputFileForWrite(YarnOutputFiles.java:213)
at 
org.apache.hadoop.mapreduce.task.reduce.MergeManagerImpl$InMemoryMerger.merge(MergeManagerImpl.java:457)
at org.apache.hadoop.mapreduce.task.reduce.MergeThread.run(MergeThread.java:94)

15/05/27 06:24:40 INFO mapreduce.Job: map 57% reduce 0%
15/05/27 06:24:46 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_41_0, Status : FAILED
FSError: java.io.IOException: No space left on device
15/05/27 06:24:48 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_46_0, Status : FAILED
FSError: java.io.IOException: No space left on device
15/05/27 06:24:49 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_44_0, Status : FAILED
Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
any valid local directory for attempt_1432720271082_0005_m_44_0_spill_0.out
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at 
org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at 
org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1584)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:720)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:790)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

15/05/27 06:24:50 INFO mapreduce.Job: Task Id : 
attempt_1432720271082_0005_m_45_0, Status : FAILED
Error: org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find 
any valid local directory for attempt_1432720271082_0005_m_45_0_spill_0.out
at 
org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:402)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
at 
org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
at 
org.apache.hadoop.mapred.YarnOutputFiles.getSpillFileForWrite(YarnOutputFiles.java:159)
at 
org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1584)
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1482)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:720)
at org.apache.hadoop.mapred.

Re: Hive Metastore Service Startup Fails

2015-06-03 Thread Yusaku Sako
Hi Pratik,

In the Blueprint, try using a password without the ampersand "&" character.
I believe it's not working because Ambari is executing the following command:

export HIVE_CONF_DIR=/etc/hive/conf.server ; 
/usr/hdp/current/hive-client/bin/schematool -initSchema -dbType mysql -userName 
hive -passWord tkdw1rN&

& has a special meaning in the shell.
So essentially, Ambari is invoking it like this and using an incorrect password:

export HIVE_CONF_DIR=/etc/hive/conf.server ; 
/usr/hdp/current/hive-client/bin/schematool -initSchema -dbType mysql -userName 
hive -passWord tkdw1rN &

Ambari should be escaping such special characters, but it doesn't look like 
it's escaping properly.
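As a hedged illustration with a shortened command line, single-quoting the password is one way to keep the shell from interpreting the ampersand:

# unquoted: the shell backgrounds the command and the password becomes "tkdw1rN"
schematool -initSchema -dbType mysql -userName hive -passWord tkdw1rN&
# single-quoted: the ampersand is passed through literally
schematool -initSchema -dbType mysql -userName hive -passWord 'tkdw1rN&'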

I hope this helps.
Yusaku

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org"
Date: Wednesday, June 3, 2015 12:48 AM
To: "user@ambari.apache.org"
Subject: RE: Hive Metastore Service Startup Fails

Any Help Appreciated !!

From: Pratik Gadiya [mailto:pratik_gad...@persistent.com]
Sent: Monday, June 01, 2015 5:21 PM
To: user@ambari.apache.org
Subject: FW: Hive Metastore Service Startup Fails
Importance: High

Hello All,

When I try to deploy a Hortonworks cluster using Ambari Blueprint APIs, it
results in a failure while starting up the Hive Metastore service.

The same blueprint works appropriately most of the time in the same environment.

The parameter which gets changed in the entire blueprint w.r.t. Hive is:

Host Mapping File Content:
{'blueprint': 'onemasterblueprint',
 'configurations': [{u'hive-env': {u'hive_metastore_user_passwd': 'tkdw1rN&'}},
                    {u'gateway-site': {u'gateway.port': u'8445'}},
                    {u'nagios-env': {u'nagios_contact': u'a...@us.ibm.com'}},
                    {u'hive-site': {u'javax.jdo.option.ConnectionPassword': 'tkdw1rN&'}},
                    {'hdfs-site': {'dfs.datanode.data.dir': '/disk1/hadoop/hdfs/data,/disk2/hadoop/hdfs/data',
                                   'dfs.namenode.checkpoint.dir': '/disk1/hadoop/hdfs/namesecondary',
                                   'dfs.namenode.name.dir': '/disk1/hadoop/hdfs/namenode'}},
                    {'core-site': {'fs.swift.impl': 'org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem',
                                   'fs.swift.service.softlayer.auth.url': 'https://dal05.objectstorage.service.networklayer.com/auth/v1.0',
                                   'fs.swift.service.softlayer.connect.timeout': '12',
                                   'fs.swift.service.softlayer.public': 'false',
                                   'fs.swift.service.softlayer.use.encryption': 'true',
                                   'fs.swift.service.softlayer.use.get.auth': 'true'}}],
 'default_password': 'tkdw1rN&',
 'host_groups': [{'hosts': [{'fqdn': 'vmktest0003.test.analytics.com'}], 'name': 'master'},
                 {'hosts': [{'fqdn': 'vmktest0004.test.analytics.com'}], 'name': 'compute'}]}

Error.txt:
2015-06-01 05:59:22,178 - Error while executing command 'start':
Traceback (most recent call last):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 123, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
 line 43, in start
self.configure(env) # FOR SECURITY
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
 line 38, in configure
hive(name='metastore')
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py",
 line 97, in hive
not_if = check_schema_created_cmd
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 148, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 149, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 115, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
 line 241, in action_run
raise ex
Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; 
/usr/hdp/current/hive-client/bin/schematool -initSchema -dbType mysql -userName 
hive -passWord [PROTECTED]' returned 1. 15/06/01 05:59:21 WARN conf.HiveConf: 
HiveConf of name hive.optimize.mapjoin.mapreduce does not exist
15/06/01 05:59:21 WARN conf.HiveConf: HiveConf of name hive.heapsize does not 
exist
15/06/01 05:59:21 WARN conf.HiveConf: HiveConf of name 
hive.server2.enable.impersonation does not exist
15/06/01 05:59:21 WARN conf.HiveConf: HiveConf of name 
hive.auto.convert.sortmerge.join.noconditionaltask does not exist
Metastore conne

Re: Ambari 2.0 - Namenode Moved - ConnectionRefused Error

2015-05-18 Thread Yusaku Sako
Hi Daniel,

You might need to run Hive Metatool to update the pointer to the new NameNode: 
https://cwiki.apache.org/confluence/display/Hive/Hive+MetaTool

hive --service metatool -listFSRoot                          <- shows the current pointer to NN
hive --service metatool -updateLocation <new-loc> <old-loc>  <- updates the pointer to NN
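For example, a hedged invocation (the NameNode URIs are placeholders; the new location comes first, followed by the old one):

hive --service metatool -updateLocation hdfs://NEW_NN_HOST:8020 hdfs://OLD_NN_HOST:8020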

I hope this helps.

Yusaku

From: Daniel Klinger <d...@web-computing.de>
Reply-To: "user@ambari.apache.org"
Date: Monday, May 18, 2015 1:10 AM
To: "user@ambari.apache.org"
Subject: Ambari 2.0 - Namenode Moved - ConnectionRefused Error

Hi all,

I moved my Namenode to a new node with Ambari 2.0. The move itself worked fine,
but now I'm getting a ConnectionRefused error on HBase and Hive (if I run a
Hive query).

Error occurred executing hive query: Error while compiling statement: FAILED:
SemanticException MetaException(message:java.net.ConnectException: Call From
node0.test.local/192.168.1.173 to node1.test.local:8020 failed on connection
exception: java.net.ConnectException: Connection refused; For more details see:
http://wiki.apache.org/hadoop/ConnectionRefused

Node1 is my old Namenode. Somewhere node1 is still listed as the Namenode, but
I can't find it. I searched all config XML files; there is no entry regarding
node1. Where is the mistake?

Greetz
dk


Re: Move History Server

2015-05-18 Thread Yusaku Sako
Cool, I'm glad you were able to make it work, and thanks for the detailed steps.

Yusaku

From: João Alves <j...@5dlab.com>
Reply-To: "user@ambari.apache.org"
Date: Monday, May 18, 2015 3:43 AM
To: "user@ambari.apache.org"
Subject: Re: Move History Server

Hey Yusaku,

Thank you for your answer. With your help I was able to move it. I will just 
add here some modifications to your instructions so it can help others.

4 - This command did not work for me but the following did:
curl -i -u USER:PASS -H 'X-Requested-By: ambari' -X POST 
-d'{"HostRoles":{"component_name":"HISTORYSERVER"}}' 
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOSTNAME/host_components

5 - I also needed to update some configs on the MapReduce2 service in Ambari. 
The configs were the following: mapreduce.jobhistory.address and 
mapreduce.jobhistory.webapp.address

Best Regards,
João Alves

On 15 May 2015, at 01:22, Yusaku Sako 
mailto:yus...@hortonworks.com>> wrote:

Hi,

Unfortunately, History Server cannot be moved from the UI at the moment.
However you can do the following:

1. Take a backup of your Ambari database
2. Stop History Server from the UI and wait until it is stopped.
3. Delete History Server from the source host via the API:
curl -i -u admin:admin -H 'X-Requested-By: ambari' -X DELETE
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME/host_components/HISTORYSERVER
4. Add History Server to the target host via the API:
curl -i -u admin:admin -H 'X-Requested-By: ambari' -X POST -d
'{"host_components" : [{"HostRoles":{"component_name":"HISTORYSERVER"}}]
}'
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOSTNAME

5. Go to the UI to install and start History Server.

I hope this helps.
Yusaku


On 5/13/15 8:29 AM, "João Alves" mailto:j...@5dlab.com>> wrote:

Hey all,

I have a cluster with HDP 2.1.7 stack which I recently upgraded from
Ambari 1.6.1 to 2.0.0. I have a computer in my cluster that I need to
decommission.

I have been moving all the master components away from this machine. The last one remaining is the History Server; however, it seems that there is no option to move it.

I searched around and did not find any resources on how to do this.
Could someone give me any pointers on how to move this component to
another machine?

Thanks for your help,
João Alves




Re: co-locate tag in metainfo.xml file for custom service not working as expected.

2015-05-18 Thread Yusaku Sako
Hi Christopher,

How are you creating your cluster?  Is it via UI or Blueprint API or
granular API?

Yusaku

On 5/18/15 1:28 PM, "Christopher Jackson"
 wrote:

>Hi All,
>
>I have created a custom Ambari Service with a master and client
>component. I have the requirement that my master component be installed
>on the same node that has an instance of the OOZIE/OOZIE_SERVER
>component. I have tried to force this by including the following snippet
>in my service's master component configuration as part of the
>metainfo.xml file. However, when I go to create a new cluster, my master
>component is placed on a node that doesn't have the OOZIE_SERVER component.
>
>
>  <dependency>
>    <name>OOZIE/OOZIE_SERVER</name>
>    <scope>host</scope>
>    <auto-deploy>
>      <enabled>true</enabled>
>      <co-locate>MY_SERVICE/MY_MASTER_COMPONENT</co-locate>
>    </auto-deploy>
>  </dependency>
>
>
>Is this not the correct way to enforce such a requirement? Or am I
>missing some additional configuration?
>
>Thanks in advance,
>
>Christopher Jackson



Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

2015-05-17 Thread Yusaku Sako
I think others can help you with the blueprint-style add host call.
In the meantime, you should also look at 
https://cwiki.apache.org/confluence/display/AMBARI/Bulk+install+components+on+selected+hosts
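
The gist of that page, as a sketch (credentials, Ambari host, and cluster name are placeholders): after registering the new components on the host with POST calls, everything still in INIT state on that host can be installed with a single PUT:

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Install all new components"},"Body":{"HostRoles":{"state":"INSTALLED"}}}' 'http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/host_components?HostRoles/host_name=vmkdev0027.persistent.com&HostRoles/state=INIT'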

Thanks,
Yusaku

From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Sunday, May 17, 2015 4:22 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: RE: Adding Hosts to Existing Cluster | Ambari 1.7.0

Jeff,

I had a look at the link you provided; however, I am not sure why it didn't work for me.

Below is the command which I tried,
Command:
curl --user admin:admin -H "X-Requested-By: ambari" -i -X POST -d 
'{"blueprint_name": "mymasterblueprint", "host_group": "compute"}' 
https://XX.XX.XX.XX:8443/api/v1/clusters/CLUSTER/hosts/vmkdev0027.persistent.com

Response:
HTTP/1.1 400 Bad Request
Set-Cookie: AMBARISESSIONID=15w0nek4yww411pi8iqy70c8u5;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 160
Server: Jetty(7.6.7.v20120910)

{
  "status" : 400,
  "message" : "The properties [blueprint, host_group] specified in the request 
or predicate are not supported for the resource type Host."
}

Please let me know if I have missed something.

Note:
vmkdev0027.persistent.com - the host which I need to add
CLUSTER - my cluster name, as specified in the URL


Thanks & Regards,
Pratik

From: Jeff Sposetti [mailto:j...@hortonworks.com]
Sent: Sunday, May 17, 2015 1:15 PM
To: user@ambari.apache.org
Subject: Re: Adding Hosts to Existing Cluster | Ambari 1.7.0


Have you looked at using Blueprints API for "add host"?



https://issues.apache.org/jira/browse/AMBARI-8458






From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Sent: Sunday, May 17, 2015 1:57 AM
To: user@ambari.apache.org
Subject: Adding Hosts to Existing Cluster | Ambari 1.7.0

Hi All,

I want to add hosts to an existing Hadoop cluster which was deployed via the Ambari REST APIs.

For this, I am referring to the link
https://cwiki.apache.org/confluence/display/AMBARI/Add+a+host+and+deploy+components+using+APIs

In the above link, we can observe that we have to make POST REST calls to install the services on the newly added hosts.
Here the number of such REST calls would be equal to the number of services we want to install (as shown below).

[inline image of the per-service REST calls omitted]

I am wondering if there is any way to install all of these services, such as DATANODE, GANGLIA_MONITOR, NODEMANAGER, etc., on the newly added hosts in a single REST call.

An explanation with a small example of that REST call body would be much appreciated.

Thanks,
Pratik


DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.



Re: Move History Server

2015-05-14 Thread Yusaku Sako
Hi,

Unfortunately, History Server cannot be moved from the UI at the moment.
However you can do the following:

1. Take a backup of your Ambari database
2. Stop History Server from the UI and wait until it is stopped.
3. Delete History Server from the source host via the API:
curl -i -u admin:admin -H 'X-Requested-By: ambari' -X DELETE
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTERNAME/hosts/HOSTNAME/host_components/HISTORYSERVER
4. Add History Server to the target host via the API:
curl -i -u admin:admin -H 'X-Requested-By: ambari' -X POST -d
'{"host_components" : [{"HostRoles":{"component_name":"HISTORYSERVER"}}]
}' 
http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOSTNAME

5. Go to the UI to install and start History Server.

I hope this helps.
Yusaku


On 5/13/15 8:29 AM, "João Alves"  wrote:

>Hey all,
>
>I have a cluster with HDP 2.1.7 stack which I recently upgraded from
>Ambari 1.6.1 to 2.0.0. I have a computer in my cluster that I need to
>decommission. 
>
>I have been moving all the master components away from this machine. The
>last one remaining is the History Server; however, it seems that there is no
>option to move it.
>
>I searched around and did not find any resources on how to do this.
>Could someone give me any pointers on how to move this component to
>another machine?
>
>Thanks for your help,
>João Alves



Re: Moving hadoop services

2015-04-24 Thread Yusaku Sako
As of Ambari 2.0:
HDFS: NameNode, Secondary NameNode, DataNodes can be moved.  JournalNode cannot 
be moved yet in case NameNode HA is enabled.
YARN: ResourceManager, App Timeline Server, NodeManagers can be moved.
HBase: HBase Master can be moved (but delete must be done explicitly via API as 
described).  RegionServers can be moved.

Yusaku

From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, April 24, 2015 11:27 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: RE: Moving hadoop services

Thanks Yusaku for the detailed information.

So this means that all the Hadoop services can be moved to other hosts if I add new hosts to the existing cluster?

Please let me know if any additional information is required.

Thanks and Regards,
Pratik Gadiya

From: Yusaku Sako [yus...@hortonworks.com<mailto:yus...@hortonworks.com>]
Sent: Friday, April 24, 2015 8:26 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Re: Moving hadoop services

Pratik,

You can move your NameNode via the UI.
Go to Services > HDFS > Service Actions.  You should see an option to move 
NameNode there.

As for HBase Master, it works a little differently.
You can add an arbitrary number of HBase Masters.  You add HBase Master to 
another host, and then you can remove it from the original host (go to the Host 
page where the HBase Master is, and select "Delete" from the Actions) to 
effectively move it.
If you are using Ambari 2.0, then there is a bug where the delete option for 
the HBase Master does not show up.
To workaround this, you can remove it via the API:

curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE 
http://{AMBARI_SERVER_HOST}:8080/api/v1/clusters/{CLUSTERNAME}/hosts/{HOSTNAME}/host_components/HBASE_MASTER

I hope this helps.
Yusaku

From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, April 24, 2015 5:40 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Moving hadoop services

Hi All,

I have deployed a Hadoop cluster and am just wondering if I could move a few of the services running on one node to another node via the REST API or Ambari UI.
For example, services like HBase Master or NameNode.

I can observe that there is only an option to move the Secondary NameNode.

Please let me know if there is any way we can move services to other hosts.

Thanks,
Pratik Gadiya

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.



Re: Ambari Cluster Monitoring

2015-04-19 Thread Yusaku Sako
Hi Pratik,

> I have provisioned a cluster using Ambari Rest API's and I want to monitor 
> the progress of the cluster deployment.

Are you using the Blueprint API?
If so, you can poll on the request ID href that is returned from the Blueprint 
cluster deployment call and check on the progress in terms of percentage (and 
whether it completed successfully or not).

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints
The response shown at the bottom of the page is the href I mentioned to check 
on the deployment status.
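
For example, polling might look like this (credentials and the request ID are placeholders):

curl -u admin:admin 'http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/requests/1?fields=Requests/progress_percent,Requests/request_status'

The deployment is done when request_status reaches COMPLETED; FAILED or ABORTED indicate errors.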

Yusaku

From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Sunday, April 19, 2015 10:38 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Ambari Cluster Monitoring


Hi All,

I have provisioned a cluster using the Ambari REST APIs and I want to monitor the progress of the cluster deployment.
Basically, I want to check if the deployment was successful and all the services are up and running.

One way could be to check the state of the services: if all the services are in the STARTED state, then we can confirm that the deployment is complete and successful.

$ curl -X GET -H "X-Requested-By: ambari" -u admin:admin 
https://<ambari-host>/api/v1/clusters/<cluster-name>/services?fields=ServiceInfo/state

Please let me know if there is any other way to check the same.

NOTE: Instead of pointing to a general link that describes all the REST calls for Ambari, I would appreciate it if someone could clarify whether there is any other approach, with an example or command.

Regards,
Pratik Gadiya

DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.


Re: Change hostname on running cluster

2015-04-18 Thread Yusaku Sako
Just FYI...
What I've seen folks do is dump the database, keep a backup, replace all
occurrences of the old hostname to the new hostname in the dump file, then
reimport.

Yusaku

On 4/18/15 9:51 AM, "Sumit Mohanty"  wrote:

>+Alejandro
>
>In theory, you can stop ambari-server, modify all occurrences of the
>hostname and that should be it. There is no first-class support for it.
>
>Alejandro, did you look at the possibility of manually changing all host
>names to rename a host
>(https://issues.apache.org/jira/browse/AMBARI-10167)
>
>-Sumit
>
>From: Frank Eisenhauer 
>Sent: Saturday, April 18, 2015 12:31 AM
>To: Ambari User
>Subject: Change hostname on running cluster
>
>Hi All,
>
>we have a running hadoop cluster where we unfortunately have a hostname
>in uppercase, e.g. SRV-HADOOP01.BIGDATA.LOCAL.
>
>As of Ambari 1.7 we are experiencing a lot of side effects which are
>presumably caused by the hostnames in uppercase.
>
>I would like to rename the particular hosts(e.g.
>srv-hadoop01.bigdata.local), so that there are only hosts with lowercase
>names in the cluster.
>
>Is it possible to change the hostname? I came across a few blogs, but in
>general renaming hostnames seems not to be recommended.
>
>Has anyone performed a hostname change?
>
>Many thanks in advance.



Re: delete using API problem

2015-04-17 Thread Yusaku Sako
Wow, this is bizarre.
Artem, do you see anything in ambari-server.log corresponding to the GET 
http://localhost:8080/api/v1/clusters/c1/services/STORM call?

Yusaku

From: Sumit Mohanty mailto:smoha...@hortonworks.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Friday, April 17, 2015 9:07 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: delete using API problem


​That error is something I am not familiar with. Perhaps someone else can chime 
in.


From: dbis...@gmail.com 
mailto:dbis...@gmail.com>> on behalf of Artem Ervits 
mailto:artemerv...@gmail.com>>
Sent: Friday, April 17, 2015 8:24 AM
To: user@ambari.apache.org
Subject: Re: delete using API problem

I think the answer lies in the last line: "Couldn't resolve host ''". How do I go about this?

{
  "href" : "http://localhost:8080/api/v1/clusters/c1/services/STORM";,
  "ServiceInfo" : {
"cluster_name" : "c1",
"maintenance_state" : "ON",
"service_name" : "STORM",
"state" : "UNKNOWN"
  },
  "alerts_summary" : {
"CRITICAL" : 0,
"MAINTENANCE" : 1,
"OK" : 0,
"UNKNOWN" : 0,
"WARNING" : 0
  },
  "alerts" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/alerts/12";,
  "Alert" : {
"cluster_name" : "c1",
"definition_id" : 22,
"definition_name" : "storm_supervisor_process_percent",
"host_name" : null,
"id" : 12,
"service_name" : "STORM"
  }
}
  ],
  "components" : [
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/DRPC_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "DRPC_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/NIMBUS";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "NIMBUS",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/STORM_UI_SERVER";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "STORM_UI_SERVER",
"service_name" : "STORM"
  }
},
{
  "href" : 
"http://localhost:8080/api/v1/clusters/c1/services/STORM/components/SUPERVISOR";,
  "ServiceComponentInfo" : {
"cluster_name" : "c1",
"component_name" : "SUPERVISOR",
"service_name" : "STORM"
  }
}
  ],
  "artifacts" : [ ]
curl: (6) Couldn't resolve host '​'



Re: Ambari 1.7.0

2015-04-16 Thread Yusaku Sako
HDP 2.2 contains Ranger.  However, Ambari 1.7.0 does not support installation 
and management of Ranger.
The ability to install and manage Ranger was added in Ambari 2.0.0.
On a related note, if the user installed Ranger manually on a cluster managed 
by Ambari 1.7.0, then when you upgrade to Ambari 2.0.0,
there are some manual steps you must do so that Ambari can manage Ranger, I 
believe.

There's no plan for Ambari to support Hue.

Yusaku

From: Pratik Gadiya 
mailto:pratik_gad...@persistent.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, April 16, 2015 5:29 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Ambari 1.7.0

Hi All,

Ambari 1.7.0 uses HDP 2.2 and when I deploy the cluster via Ambari UI, I am not 
able to see Ranger and Hue services in the list of the deployed services.
HDP 2.2 documentation says that Ranger and Hue are provided with the package.

However, I am not sure why they're not getting installed.
Do I have to perform some additional steps to deploy these services?

Please let me know if you have any inputs on this.

Thanks,
Pratik Gadiya


DISCLAIMER == This e-mail may contain privileged and confidential 
information which is the property of Persistent Systems Ltd. It is intended 
only for the use of the individual or entity to which it is addressed. If you 
are not the intended recipient, you are not authorized to read, retain, copy, 
print, distribute or use this message. If you have received this communication 
in error, please notify the sender and delete all copies of this message. 
Persistent Systems Ltd. does not accept any liability for virus infected mails.


Re: ambari-agent (ver. 2.0.0.0) build failure

2015-04-15 Thread Yusaku Sako
CentOS 6.x comes with Python 2.6.y, and using 2.7 could cause problems.
Do you have any more output from the Ambari Agent build process?

Yusaku


From: jongchul seon mailto:jongchul.s...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Monday, April 13, 2015 2:10 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: ambari-agent (ver. 2.0.0.0) build failure

Hi

I'm trying to build ambari 2.0.

I got the error message below.

My build environment is as follows:

maven version: 3.0.5
python: 2.7.8
os: CentOS 6.5
java: OpenJDK 1.7

Am I missing something?


[INFO] Ambari Main ... SUCCESS [9.723s]
[INFO] Apache Ambari Project POM . SUCCESS [0.347s]
[INFO] Ambari Web  SUCCESS [6:02.729s]
[INFO] Ambari Views .. SUCCESS [3.203s]
[INFO] Ambari Admin View . SUCCESS [48.014s]
[INFO] Ambari Metrics Common . SUCCESS [3.494s]
[INFO] Ambari Server . SUCCESS [4:21.089s]
[INFO] Ambari Agent .. FAILURE [2.536s]
[INFO] Ambari Client . SKIPPED
[INFO] Ambari Python Client .. SKIPPED
[INFO] Ambari Groovy Client .. SKIPPED
[INFO] Ambari Shell .. SKIPPED
[INFO] Ambari Python Shell ... SKIPPED
[INFO] Ambari Groovy Shell ... SKIPPED
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 11:32.204s
[INFO] Finished at: Sun Apr 12 18:45:48 EDT 2015
[INFO] Final Memory: 82M/811M
[INFO] 
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec 
(python-package) on project ambari-agent: Command execution failed. Process 
exited with an error: 1(Exit value: 1) -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command



Re: Start/stop all services programmatically

2015-04-08 Thread Yusaku Sako
Sorry, forgot to answer your second question regarding dependencies.
Such dependencies are specified in a file called role_command_order.json as 
part of the stack definition.

https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/role_command_order.json

If you try to start/stop all services in bulk, the command order rules will be 
followed automatically by the server.
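
For example, the rule covering your HiveServer/MySQL case looks roughly like this in that file (abridged; the record format is "blockedRole-command": ["blockerRole-command", ...]):

  "general_deps": {
    "HIVE_SERVER-START": ["NODEMANAGER-START", "MYSQL_SERVER-START"],
    ...
  }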

Yusaku

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Wednesday, April 8, 2015 5:27 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Start/stop all services programmatically

Hi Krzysztof,

You can do everything that the UI does with the API.
The best way to learn what API calls the UI is making is to use the browser's 
developer tool and watch the network traffic.

Stop all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Stop all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

Start all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Start all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

I hope this helps.
Yusaku

From: Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, April 7, 2015 12:15 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Start/stop all services programmatically

Hello,

I am busy implementing a manual job to stop all host services via a script before rebooting the OS. The examples I found in the wiki are per service or component.

1. Is there any way to invoke the stop/start for all host components, just like from the web interface?
2. How does Ambari determine the proper order for the services to start/stop, e.g. first stop HiveServer before stopping MySQL, etc.?

curl -s --user admin:admin -H "X-Requested-By: ambari" -X GET 
"http://ambari:8080/api/v1/clusters/ing_hdp/components/?ServiceComponentInfo/category.in(SLAVE,MASTER)&host_components/HostRoles/host_name=host1&fields=host_components/HostRoles/component_name,host_components/HostRoles/state"
 | jq -r '[[.items[].host_components[].HostRoles.component_name]]|tostring' | 
sed -r 's/[\["]//g' | sed -r 's/[]]//g'
  function stop(){
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": 
{"context" :"Stop '"$1"' via REST"}, "Body": {"HostRoles": {"state": 
"INSTALLED"}}}' 
http://ambari:8080/api/v1/clusters/ing_hdp/hosts/host1/host_components/$1
}
Thanks for any guide.


Re: Start/stop all services programmatically

2015-04-08 Thread Yusaku Sako
Hi Krzysztof,

You can do everything that the UI does with the API.
The best way to learn what API calls the UI is making is to use the browser's 
developer tool and watch the network traffic.

Stop all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Stop all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

Start all services:
curl -i -uadmin:admin -H "X-Requested-By: ambari" -X PUT -d '
{"RequestInfo":{"context":"Start all 
services","operation_level":{"level":"CLUSTER","cluster_name":"ing_hdp"}},"Body":{"ServiceInfo":{"state":"STARTED"}}}
' http://ambari:8080/api/v1/clusters/ing_hdp/services

I hope this helps.
Yusaku

From: Krzysztof Adamski 
mailto:adamskikrzys...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, April 7, 2015 12:15 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Start/stop all services programmatically

Hello,

I am busy implementing a manual job to stop all host services via a script before rebooting the OS. The examples I found in the wiki are per service or component.

1. Is there any way to invoke the stop/start for all host components, just like from the web interface?
2. How does Ambari determine the proper order for the services to start/stop, e.g. first stop HiveServer before stopping MySQL, etc.?

curl -s --user admin:admin -H "X-Requested-By: ambari" -X GET 
"http://ambari:8080/api/v1/clusters/ing_hdp/components/?ServiceComponentInfo/category.in(SLAVE,MASTER)&host_components/HostRoles/host_name=host1&fields=host_components/HostRoles/component_name,host_components/HostRoles/state"
 | jq -r '[[.items[].host_components[].HostRoles.component_name]]|tostring' | 
sed -r 's/[\["]//g' | sed -r 's/[]]//g'
  function stop(){
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo": 
{"context" :"Stop '"$1"' via REST"}, "Body": {"HostRoles": {"state": 
"INSTALLED"}}}' 
http://ambari:8080/api/v1/clusters/ing_hdp/hosts/host1/host_components/$1
}
Thanks for any guide.


Re: Locating services using Ambari APIs

2015-04-02 Thread Yusaku Sako
Hi Umar,

You can make an API call like this:
GET 
/api/v1/clusters/cluster_name/host_components?HostRoles/component_name=SLIDER_CLIENT
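
As a concrete sketch (credentials, Ambari host, and cluster name are placeholders), restricting the output to just the host names:

curl -u admin:admin 'http://AMBARI_SERVER_HOST:8080/api/v1/clusters/CLUSTER_NAME/host_components?HostRoles/component_name=SLIDER_CLIENT&fields=HostRoles/host_name'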

Yusaku

From: Um ar mailto:truthisoutwh...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, April 2, 2015 5:39 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Locating services using Ambari APIs

Hi,

I am looking for a way to find out which service is running where on my cluster 
programmatically using Ambari APIs. For example, how to locate where 'Slider 
Client' is installed so that any slider commands can be redirected to that node.

Thanks,
-Umar


Re: Ambari Views Error

2015-03-23 Thread Yusaku Sako
Hi John,

Perhaps you can have the link within your View launch the Job History Server page in a new tab (i.e., target="_blank")?

Yusaku

From: "John.Bork" mailto:john.b...@target.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 24, 2015 6:20 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Ambari Views Error

Hi, I am developing an Ambari View, one component of which provides links to jobs on the Job History Server. When a link is clicked, the iframe that held the view goes to the Job History Server and throws the following error in the browser console.

Uncaught SecurityError: Blocked a frame with origin "<origin>" from accessing a frame with origin "<origin>". Protocols, domains, and ports must match. step9_view.js:1
App.MainViewsDetailsView.Em.View.extend.resizeFunction step9_view.js:1
(anonymous function)

The link is inserted into a bootstrap tblflat element row from which it can be 
clicked.

Also, after the link is clicked and the iframe opens the Job History Server, the iframe height attribute is set to auto, which causes the height to shrink to between 100 and 200 pixels. Is this the correct action, or should the iframe be prevented from following the link in the first place? What is the expected behavior?


- John Bork




Re: cannot add hosts to an HDP 2.1.2.1 cluster

2015-03-22 Thread Yusaku Sako
Hi Brian,

> I've managed to reconfigure it so that the baseUrl
being used for the HDP repo is 
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.2.0.

Did you make this change directly on the file system or through the Ambari API?
Ambari will overwrite the URLs in the HDP repo file while adding hosts if you 
modified the repo files directly on the file system.

Yusaku


From: Brian Jeltema 
mailto:brian.jelt...@digitalenvoy.net>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 10:43 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: cannot add hosts to an HDP 2.1.2.1 cluster

I'm still fighting this problem, though I've made some progress.

I'm running Ambari 1.6.0. I've managed to reconfigure it so that the baseUrl
being used for the HDP repo is 
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.2.0.
However, when I add a new host to the cluster, it is trying to install the 
latest 2.1.7.0 package,
which is not present in that repo. How can I force the new host to use the 
2.1.2.0 bits?

Thanks.
Brian


On Mar 16, 2015, at 11:07 AM, Brian Jeltema 
mailto:brian.jelt...@digitalenvoy.net>> wrote:

I have an existing cluster running HDP 2.1.2.1. When I try to add hosts to that cluster, the install fails:

Error: Cannot retrieve repository metadata (repomd.xml) for repository: 
HDP-2.1. Please verify its path and try again

It's attempting to download the repo from

   http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.2.1/hdp.repo

There is a repo available for 2.1.2.0. Was the repo for 2.1.2.1 intentionally 
removed?
What is the cleanest way to fix this?

Brian



Re: Couple of question.

2015-03-22 Thread Yusaku Sako
Hi Angel,

Thanks for your kind words.

> Is it true that it is not possible to use Ambari with an existing Hadoop
> installation? So it is necessary to install the whole platform using Ambari.
This could be a problem for our platform team.

That is true.
Currently, you cannot directly manage and monitor an existing Hadoop cluster 
that was installed outside of Ambari.
However, some folks have successfully performed "take over" procedures to bring 
such existing clusters under Ambari's management.
This involves a bit of surgery such as using Ambari's Install Wizard on the 
existing cluster but using fake directories for DataNodes (and others), and 
switching the directories to the actual ones after Install Wizard is done, etc. 
 This is not very straightforward and can get you in trouble if you are not 
careful, but it is an option.

Yusaku


From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Monday, March 23, 2015 5:37 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Awesome community support!
With people like you, I am sure that Ambari will be the best monitoring tool!

My last question (I don't want to bore you :)):

Is it true that it is not possible to use Ambari with an existing Hadoop installation? So it is necessary to install the whole platform using Ambari.
This could be a problem for our platform team.

Regards.


2015-03-21 1:01 GMT+00:00 Yusaku Sako 
mailto:yus...@hortonworks.com>>:
Hi Angel,

> Ganglia is going to be replaced by the Ambari Metric System. Will it be the same for
> Nagios?

Yes, Ambari will ship with its own alerting system in 2.0.  Nagios is no longer 
supported.

> If we cannot use Ambari, we are thinking of developing an ad hoc application for
> metering and monitoring. We are thinking of using time-series databases to store
> metrics, like Druid or OpenTSDB. Is Ambari going to use any databases of this
> type?

Ambari's Metric System (AMS) uses HBase as its implementation, like OpenTSDB.  
In addition, AMS uses Phoenix to support SQL queries.
I believe the storage layer for AMS was designed so that it can be swapped out, 
but I will let others who are more familiar comment on that.

> And the question that maybe you are tired of hearing. :) Do you know when
> version 2.0.0 is planned to be released?

A release vote for 2.0 just went out.  If everything goes well, 2.0 will ship 
within days.

Thanks,
Yusaku


From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, March 20, 2015 6:56 AM

To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Thank you for your response.

Other three question:

  *   Ganglia is going to be replaced by the Ambari Metric System. Will it be the same for Nagios?
  *   If we cannot use Ambari, we are thinking of developing an ad hoc application for metering and monitoring. We are thinking of using time-series databases to store metrics, like Druid or OpenTSDB. Is Ambari going to use any databases of this type?
  *   And the question that maybe you are tired of hearing. :) Do you know when version 2.0.0 is planned to be released?

Regards and thanks for your time.


2015-03-19 6:42 GMT+00:00 Yusaku Sako 
mailto:yus...@hortonworks.com>>:
Hi Angel,

On Ambari 1.7.0 and earlier, Ganglia is used to capture information about 
memory usage, network usage, and other system metrics, as well as 
Hadoop-specific metrics.
Ambari 2.0.0 and onwards, Ambari Metric System will replace Ganglia as the 
underlying metrics collection framework.
In either system, you can tune sampling/aggregation frequencies and retention 
policies so that you can control how long the collected samples/aggregated 
values are kept at what granularity.

Yusaku

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Thursday, March 19, 2015 4:10 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Thanks for your responses.

Regarding historical data: one of the requirements that we have is to be able to show/compare historical information.
For example, it is necessary to be able to compare the memory used yesterday (when we executed the new version of a heavy MapReduce process) with the memory consumed two months ago.
This is only one example.

Regards.


2015-03-18 6:35 GMT+00:00 Yusaku Sako 

Re: Couple of question.

2015-03-20 Thread Yusaku Sako
Hi Angel,

> Ganglia is going to be replaced by the Ambari Metric System. Will it be the same for
> Nagios?

Yes, Ambari will ship with its own alerting system in 2.0.  Nagios is no longer 
supported.

> If we cannot use Ambari, we are thinking of developing an ad hoc application for
> metering and monitoring. We are thinking of using time-series databases to store
> metrics, like Druid or OpenTSDB. Is Ambari going to use any databases of this
> type?

Ambari's Metric System (AMS) uses HBase as its implementation, like OpenTSDB.  
In addition, AMS uses Phoenix to support SQL queries.
I believe the storage layer for AMS was designed so that it can be swapped out, 
but I will let others who are more familiar comment on that.

> And the question that maybe you are tired of hearing. :) Do you know when
> version 2.0.0 is planned to be released?

A release vote for 2.0 just went out.  If everything goes well, 2.0 will ship 
within days.

Thanks,
Yusaku


From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, March 20, 2015 6:56 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Thank you for your response.

Other three question:

  *   Ganglia is going to be replaced by the Ambari Metric System. Will it be the same for Nagios?
  *   If we cannot use Ambari, we are thinking of developing an ad hoc application for metering and monitoring. We are thinking of using time-series databases to store metrics, like Druid or OpenTSDB. Is Ambari going to use any databases of this type?
  *   And the question that maybe you are tired of hearing. :) Do you know when version 2.0.0 is planned to be released?

Regards and thanks for your time.


2015-03-19 6:42 GMT+00:00 Yusaku Sako 
mailto:yus...@hortonworks.com>>:
Hi Angel,

On Ambari 1.7.0 and earlier, Ganglia is used to capture information about 
memory usage, network usage, and other system metrics, as well as 
Hadoop-specific metrics.
Ambari 2.0.0 and onwards, Ambari Metric System will replace Ganglia as the 
underlying metrics collection framework.
In either system, you can tune sampling/aggregation frequencies and retention 
policies so that you can control how long the collected samples/aggregated 
values are kept at what granularity.

Yusaku

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Thursday, March 19, 2015 4:10 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Thanks for your responses.

Regarding historical data: one of the requirements that we have is to be able to show/compare historical information.
For example, it is necessary to be able to compare the memory used yesterday (when we executed the new version of a heavy MapReduce process) with the memory consumed two months ago.
This is only one example.

Regards.


2015-03-18 6:35 GMT+00:00 Yusaku Sako 
mailto:yus...@hortonworks.com>>:
Hi Angel,

Please see my response inline below:

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, March 6, 2015 12:50 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Couple of question.

Hi everybody.

I am looking for a way to customize Ambari with new widgets and views.
After researching for a few days (I have never worked with this tool before), I have a couple of questions:

  *   Is there any easy way to create a new widget to add to the main page of Ambari?

  *   Is there any possibility to have a different layout (different widgets and views) per user? Something like profiles.

  *   Is it possible to secure widgets/views/services by role (authorization)?

[YS] These will be covered by new features being worked on to make the 
dashboards customizable via https://issues.apache.org/jira/browse/AMBARI-9792.
You will be able to define a default layout, as well as customized layouts per 
user, create new widgets based on different metrics, share those widgets, etc.  
Views (and their features) can be authorized per the Ambari Views framework 
already.

  *   I did not find anything about historical data (real historical data, over years). Is it possible?

[YS] Can you clarify what you mean?  Do you mean various time-series data 
emitted by the system/various components?

I downloaded the source code for 1.7 (GitHub tag 1.7), but the pom.xml says that it is version 1.3.0-SNAPSHOT. Curiously, there are not re

Re: Does Ambari support multiple languages now?

2015-03-20 Thread Yusaku Sako
That's correct.
Ambari does not support i18n yet, though there are developers in the community 
interested in providing i18n.

Thanks,
Yusaku

From: Dashuang He mailto:he...@lenovo.com>>
Reply-To: "d...@ambari.apache.org" 
mailto:d...@ambari.apache.org>>
Date: Friday, March 20, 2015 4:55 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>, 
"d...@ambari.apache.org" 
mailto:d...@ambari.apache.org>>
Subject: Does Ambari support multiple languages now?

All,

Does anyone know whether Ambari has plans to support multiple languages now? I searched the Ambari source code; it is not enabled for multiple-language support yet. Thanks.



Vincent (Dashuang) He
China Mendocino Team Overall Leader
e: he...@lenovo.com
c: (21)20590065
voip:6210065
mobile: 18116117580
7F, Buliding A, 560 Songtao road, Shanghai, China
OneTeam with OneDream, to delight our customers!




Re: Couple of question.

2015-03-18 Thread Yusaku Sako
Hi Angel,

On Ambari 1.7.0 and earlier, Ganglia is used to capture information about 
memory usage, network usage, and other system metrics, as well as 
Hadoop-specific metrics.
Ambari 2.0.0 and onwards, Ambari Metric System will replace Ganglia as the 
underlying metrics collection framework.
In either system, you can tune sampling/aggregation frequencies and retention 
policies so that you can control how long the collected samples/aggregated 
values are kept at what granularity.

Yusaku

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Thursday, March 19, 2015 4:10 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Couple of question.

Hi Yusaku.
Thanks for your responses.

Regarding historical data: one of the requirements that we have is to be able to show/compare historical information.
For example, it is necessary to be able to compare the memory used yesterday (when we executed the new version of a heavy MapReduce process) with the memory consumed two months ago.
This is only one example.

Regards.


2015-03-18 6:35 GMT+00:00 Yusaku Sako 
mailto:yus...@hortonworks.com>>:
Hi Angel,

Please see my response inline below:

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, March 6, 2015 12:50 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Couple of question.

Hi everybody.

I am looking for a way to customize Ambari with new widgets and views.
After researching for a few days (I have never worked with this tool before), I have a couple of questions:

  *   Is there any easy way to create a new widget to add to the main page of Ambari?

  *   Is there any possibility to have a different layout (different widgets and views) per user? Something like profiles.

  *   Is it possible to secure widgets/views/services by role (authorization)?

[YS] These will be covered by new features being worked on to make the 
dashboards customizable via https://issues.apache.org/jira/browse/AMBARI-9792.
You will be able to define a default layout, as well as customized layouts per 
user, create new widgets based on different metrics, share those widgets, etc.  
Views (and their features) can be authorized per the Ambari Views framework 
already.

  *   I did not find anything about historical data (real historical data, over years). Is it possible?

[YS] Can you clarify what you mean?  Do you mean various time-series data 
emitted by the system/various components?

I downloaded the source code for 1.7 (GitHub tag 1.7), but the pom.xml says that it is version 1.3.0-SNAPSHOT. Curiously, there is no mention of a 1.3 release on the ambari.apache.org page. Why?

[YS] That's just an artifact of the source code in Ambari 1.7.0.  If you want 
to rebuild, you can issue "mvn versions:set -DnewVersion=1.7.0.0" to set the 
version in the pom.xml files.  You are correct that there never was a 1.3 
release.

Regards.






Re: Hi

2015-03-18 Thread Yusaku Sako
Try:

curl -u uname:pwd 
url_of_my_cluster/api/v1/clusters/my_cluster_name?format=blueprint

Yusaku

From: bigdata hadoop mailto:hadoopst...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, March 19, 2015 8:09 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Hi

Hi All

I am trying to create a blueprint for an existing cluster which does not have a 
blueprint, so is there any way to create it directly?

I tried the following but it doesn't work.
curl -u uname:pwd -H "X-Requested-By: ambari" -X GET 
url_of_my_cluster/api/v1/blueprints

Thanks for your help. I appreciate it.

Thanks
Sowjanya


Re: Did something get broken for webhcat today?

2015-03-18 Thread Yusaku Sako
Greg,

Ambari does automatically retrieve the repo info for the latest maintenance 
version of the stack.
For example, if you select "HDP 2.2", it will pull the latest HDP 2.2.x version.
It seems like HDP 2.2.3 was released last night, so when you are installing a 
new cluster it is trying to install with 2.2.3.
Since you already have HDP 2.2.0 bits pre-installed on your image, you need to 
explicitly set the repo URL to 2.2.0 bits in the Select Stack page, as Jeff 
mentioned.

This is only true for new clusters being installed.
For adding hosts to existing clusters, it will continue to use the repo URL 
that you originally used to install the cluster with.

Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, March 19, 2015 1:56 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: Did something get broken for webhcat today?

We did install that repo when we built the images we're using:

wget -O /etc/yum.repos.d/hdp.repo 
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

We preinstall a lot of packages on the images to reduce install time, including 
ambari.  So our version of Ambari didn't change, and we didn't inject those new 
repos.  Does ambari self-update or phone home to get the latest repos?  I can't 
figure out how the new repo got injected.

Greg


From: Jeff Sposetti mailto:j...@hortonworks.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 11:48 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?


In Ambari Web > Admin > Stack (or during install, on Select Stack, expand 
Advanced Repository Options): can you update your HDP repo Base URL to use the 
HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0



From: Greg Hill mailto:greg.h...@rackspace.com>>
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti mailto:j...@hortonworks.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Are you using ambari trunk or ambari 2.0.0 branch builds?

Also please confirm: your HDP repos have not changed (i.e., are you using local repos for the HDP stack packages)?

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Did something get broken for webhcat today?

Starting this morning, we started seeing this on every single install.  I think 
someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This 
is rather urgent as we are no longer able to provision HDP 2.2 clusters at all 
because of it.


2015-03-18 15:58:05,982 - Group['hadoop'] {'ignore_failures': False}
2015-03-18 15:58:05,984 - Modifying group hadoop
2015-03-18 15:58:06,080 - Group['nobody'] {'ignore_failures': False}
2015-03-18 15:58:06,081 - Modifying group nobody
2015-03-18 15:58:06,219 - Group['users'] {'ignore_failures': False}
2015-03-18 15:58:06,220 - Modifying group users
2015-03-18 15:58:06,370 - Group['nagios'] {'ignore_failures': False}
2015-03-18 15:58:06,371 - Modifying group nagios
2015-03-18 15:58:06,474 - User['nobody'] {'gid': 'hadoop', 'ignore_fa

Re: Changing HDFS Log Directories in Ambari 1.7

2015-03-17 Thread Yusaku Sako
> However, you should proceed with caution since changing hdfs_log_dir_prefix 
> has not really been tested.

Let me clarify.  What I meant to say was that changing hdfs_log_dir_prefix 
*after the cluster has been installed* has not really been tested.

Yusaku


From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 3:13 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Changing HDFS Log Directories in Ambari 1.7

Hi Nishanth,

You can try changing "hdfs_log_dir_prefix" in "hadoop-env" via the script 
/var/lib/ambari-server/resources/scripts/configs.sh.
However, you should proceed with caution since changing hdfs_log_dir_prefix has 
not really been tested.

Yusaku

From: Nishanth S mailto:nishanth.2...@gmail.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 1:16 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: Changing HDFS Log Directories in Ambari 1.7

Hello,

I am very new to Ambari and am looking for a way to change the Hadoop log directories. By default they point to /var/log. I would like to set this to a custom mounted directory. I have already tried playing around with hdfs_log_dir and hdfs_log_dir_prefix and want to check if there is an option other than symlinks. I would really appreciate any help in this regard.

Thanks,
Nishanth

On Mon, Mar 16, 2015 at 1:02 PM, Nishanth S 
mailto:nishanth.2...@gmail.com>> wrote:
Hello,

I am very new to Ambari and am looking for a way to change the Hadoop log directories. By default they point to /var/log. I would like to set this to a custom mounted directory. I have already tried playing around with hdfs_log_dir and hdfs_log_dir_prefix and want to check if there is an option other than symlinks. I would really appreciate any help in this regard.

Thanks,
Nishanth



Re: Couple of question.

2015-03-17 Thread Yusaku Sako
Hi Angel,

Please see my response inline below:

From: Angel Cervera Claudio 
mailto:angelcerv...@silyan.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Friday, March 6, 2015 12:50 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Couple of question.

Hi everybody.

I am looking for a way to customize Ambari with new widgets and views.
After researching for a few days (I have never worked with this tool before), I have a couple of questions:

  *   Is there any easy way to create a new widget to add to the main page of Ambari?

  *   Is there any possibility to have a different layout (different widgets and views) per user? Something like profiles.

  *   Is it possible to secure widgets/views/services by role (authorization)?

[YS] These will be covered by new features being worked on to make the 
dashboards customizable via https://issues.apache.org/jira/browse/AMBARI-9792.
You will be able to define a default layout, as well as customized layouts per 
user, create new widgets based on different metrics, share those widgets, etc.  
Views (and their features) can be authorized per the Ambari Views framework 
already.

  *   I did not find anything about historical data (real historical data, over years). Is it possible?

[YS] Can you clarify what you mean?  Do you mean various time-series data 
emitted by the system/various components?

I downloaded the source code for 1.7 (GitHub tag 1.7), but the pom.xml says that it is version 1.3.0-SNAPSHOT. Curiously, there is no mention of a 1.3 release on the ambari.apache.org page. Why?

[YS] That's just an artifact of the source code in Ambari 1.7.0.  If you want 
to rebuild, you can issue "mvn versions:set -DnewVersion=1.7.0.0" to set the 
version in the pom.xml files.  You are correct that there never was a 1.3 
release.

Regards.





Re: Changing HDFS Log Directories in Ambari 1.7

2015-03-17 Thread Yusaku Sako
Hi Nishanth,

You can try changing "hdfs_log_dir_prefix" in "hadoop-env" via the script 
/var/lib/ambari-server/resources/scripts/configs.sh.
However, you should proceed with caution since changing hdfs_log_dir_prefix has 
not really been tested.
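
A sketch of that call (Ambari host, cluster name, and the new path are placeholders; configs.sh takes an action, the Ambari host, the cluster name, a config type, and a key/value):

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set AMBARI_SERVER_HOST CLUSTER_NAME hadoop-env hdfs_log_dir_prefix "/data/log/hadoop"

After changing it, restart the affected services from Ambari so the new value takes effect.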

Yusaku

From: Nishanth S mailto:nishanth.2...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 18, 2015 1:16 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: Changing HDFS Log Directories in Ambari 1.7

Hello,

I am very new to Ambari and am looking for a way to change the Hadoop log directories. By default they point to /var/log. I would like to set this to a custom mounted directory. I have already tried playing around with hdfs_log_dir and hdfs_log_dir_prefix and want to check if there is an option other than symlinks. I would really appreciate any help in this regard.

Thanks,
Nishanth

On Mon, Mar 16, 2015 at 1:02 PM, Nishanth S 
mailto:nishanth.2...@gmail.com>> wrote:
Hello,

I am very new to Ambari and am looking for a way to change the Hadoop log directories. By default they point to /var/log. I would like to set this to a custom mounted directory. I have already tried playing around with hdfs_log_dir and hdfs_log_dir_prefix and want to check if there is an option other than symlinks. I would really appreciate any help in this regard.

Thanks,
Nishanth



Re: cannot add hosts to an HDP 2.1.2.1 cluster

2015-03-16 Thread Yusaku Sako
Hi Brian,

I've sent a response directly to you as this is vendor-specific.

Yusaku

From: Brian Jeltema 
mailto:brian.jelt...@digitalenvoy.net>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 17, 2015 12:07 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: cannot add hosts to an HDP 2.1.2.1 cluster

I have an existing cluster running HDP 2.1.2.1. When I try to add hosts to that cluster, the install fails:

Error: Cannot retrieve repository metadata (repomd.xml) for repository: 
HDP-2.1. Please verify its path and try again

It's attempting to download the repo from

   http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.2.1/hdp.repo

There is a repo available for 2.1.2.0. Was the repo for 2.1.2.1 intentionally 
removed?
What is the cleanest way to fix this?

Brian


Re: COMMERCIAL:Re: decommission multiple nodes issue

2015-03-04 Thread Yusaku Sako
Greg,

You should be able to make the call as documented in the Wiki.
I've tested on Ambari 1.6.1 and Ambari 2.0.0 and the calls work fine for both 
DataNodes and NodeManagers, regardless of host maintenance mode.
Please let us know if you see issues.

Thanks,
Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 4, 2015 6:26 AM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: Re: COMMERCIAL:Re: decommission multiple nodes issue

IIRC switching it to HOST_COMPONENT made it so I couldn't pass in multiple 
hosts (that was what I was doing originally, and Ambari just rejected the 
request outright, unless my memory is tricking me).  Maybe I just needed 
slightly different syntax for that case?

Also, decommissioning NODEMANAGER using CLUSTER and a list of hosts did not 
exhibit the same behavior.  It seemed to decommission them properly, even when 
in maintenance mode.

Greg

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 at 9:41 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: COMMERCIAL:Re: decommission multiple nodes issue

Hi Greg,

This is actually by design.
If you want to decommission all DataNodes regardless of their host maintenance 
mode, you need to change "RequestInfo/level" from "CLUSTER" to "HOST_COMPONENT".
When you set the "level" to "CLUSTER", bulk operations (in this case 
decommission) would be skipped on the matching target resources in case the 
host(s) are in maintenance mode.
If you set to "HOST_COMPONENT", it would ignore any host-level maintenance mode.
This is a really mysterious, undocumented part of Ambari, unfortunately.
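
For concreteness, a minimal sketch of the changed fragment (the rest of the 
DECOMMISSION payload, as quoted at the bottom of this thread, stays the same; 
the cluster name is a placeholder):

"operation_level": {
    "level": "HOST_COMPONENT",
    "cluster_name": "MyCluster"
}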

Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 9:32 AM
To: Sean Roberts mailto:srobe...@hortonworks.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

I have verified that if maintenance mode is set on a host, then it is ignored 
by the decommission process, but only if you try to decommission multiple hosts 
at the same time.  I'll open a bug.

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:34 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

Greg - Same here on submitting JSON. Although they are JSON documents you have 
to submit them as plain form. This is true across all of Ambari. I opened a bug 
for it a month back.


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill <greg.h...@rackspace.com>
Date: March 2, 2015 at 19:32:34
To: Sean Roberts <srobe...@hortonworks.com>, user@ambari.apache.org
Subject:  Re: decommission multiple nodes issue

That causes a server error.  I’ve yet to see any part of the API that accepts 
JSON arrays like that as input; it’s almost always, if not always, a 
comma-separated string like I posted.  Many methods even return double-encoded 
JSON values (i.e. "key": "[\"value1\",\"value2\"]").  It's kind of annoying and 
inconsistent, honestly, and not documented anywhere.  You just have to have 
your client code choke on it and then go add another data[key] = 
json.loads(data[key]) in the client to account for it.

I am starting to think it’s because I set the nodes into maintenance mode 
first, as doing the decommission command manually from the client works fine 
when the nodes aren’t in maintenance mode.  I’ll keep digging, I guess, but it 
is weird that the exact same command worked this time (the commandArgs are 
identical to the one that did nothing).

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:22 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue


Racker Greg - I’m not familiar with the decommissioning API, 

Re: Trigger a script after cluster installation

2015-03-04 Thread Yusaku Sako
Do you see your new services through the Ambari REST API?

http://ambari-server:8080/api/v1/stacks/HDP/versions/2.2/services
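
For example (assuming default admin/admin credentials):

curl -u admin:admin 'http://ambari-server:8080/api/v1/stacks/HDP/versions/2.2/services'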

Yusaku

From: Giovanni Paolo Gibilisco mailto:gibb...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Wednesday, March 4, 2015 8:15 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: Trigger a script after cluster installation

Thank you, this was very useful but I'm facing another issue now:

I have followed the guide at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133#Overview(Ambari1.5.0orlater)-DefiningaServiceandComponents

and added a dummy service to the Ambari service stacks, in particular to HDP 2.2. 
However, after restarting the Ambari server, the "add service" button remains 
disabled. I noticed that Ambari made an archive in the new folder I created for 
the service, so I assume it distributed it to the other agents.

How might one mitigate this problem?

On Tue, Mar 3, 2015 at 6:53 PM Jayush Luniya 
mailto:jlun...@hortonworks.com>> wrote:
You can implement a custom command for it.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133

"Ambari supports different commands scripts written in PYTHON. The type is used 
to know how to execute the command scripts. You can also create custom commands 
if there are other commands beyond the default lifecycle commands your 
component needs to support."

Regards
Jayush

From: Giovanni Paolo Gibilisco mailto:gibb...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 at 9:37 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Trigger a script after cluster installation

Hi,
I'm developing a script to configure Hue according to the configuration managed 
by Ambari. I would like to run this script after:

the installation of the cluster via the Ambari web interface is finished AND 
services have been started.

Is there a way to know when this process has finished?
Best,
Giovanni


Re: (Host Checks) Hostname Resolution Issues

2015-03-03 Thread Yusaku Sako
Hi Tao,

Can you run the following API call from the Ambari Server host?
Please replace the hostnames and jdk_location appropriately.

curl -i -uadmin:admin -H 'X-Requested-By: ambari' -d '{
  "RequestInfo": {
"action": "check_host",
"context": "Check host",
"parameters": {
  "check_execute_list": "host_resolution_check",
  "hosts": "c6401.ambari.apache.org, c6402.ambari.apache.org, 
c6403.ambari.apache.org",
  "jdk_location":"/usr/jdk64/jdk1.7.0_67",
  "threshold": "20"
}
  },
  "Requests/resource_filters": [
{
  "hosts": 
"c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
}
  ]
}' http://localhost:8081/api/v1/requests

After you make the call, you will get a response like:
{
  "href" : "http://localhost:8081/api/v1/requests/32";,
  "Requests" : {
"id" : 32,
"status" : "InProgress"
  }
}

Note the "id", and make the following call (replace id 32 with the actual id 
you get):
curl -i -u admin:admin 
http://localhost:8081/api/v1/requests/32/tasks?fields=Tasks/*

Keep calling the above call until the checks complete.
You should be able to see what went wrong.
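
For example, a small polling sketch (the request id 32 and the admin/admin 
credentials are placeholders):

# poll until the check request leaves the InProgress state
while curl -s -u admin:admin http://localhost:8081/api/v1/requests/32 | grep -q InProgress; do
  sleep 5
done
curl -s -u admin:admin 'http://localhost:8081/api/v1/requests/32/tasks?fields=Tasks/*'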

I hope this helps,
Yusaku

From: , Tao mailto:ta...@aware.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Friday, February 13, 2015 11:48 AM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: (Host Checks) Hostname Resolution Issues

Hi all,

I am trying to create a stack through Ambari Web "install wizard". At step 
"Confirm Hosts", all nodes pass "registration", but one node reports "Hostname 
Resolution Issues", with this detailed report:

##
# Hostname Resolution
#
# A newline delimited list of hostname resolution issues.
##
HOSTNAME RESOLUTION ISSUES
vm-galaxy03.aware.com could not resolve: .


I do not see any difference between this node (vm-galaxy03.aware.com) and the 
others.  Could anyone help?

Thanks,
-Tao



Re: decommission multiple nodes issue

2015-03-03 Thread Yusaku Sako
Sorry, the Wiki title changed:
https://cwiki.apache.org/confluence/display/AMBARI/API+to+decommission+DataNodes+and+NodeManagers

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 8:49 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: Re: decommission multiple nodes issue

BTW, I've started a new Wiki on decommissioning DataNodes: 
https://cwiki.apache.org/confluence/display/AMBARI/API+to+decommission+DataNodes

Yusaku

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 7:41 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: Re: decommission multiple nodes issue

Hi Greg,

This is actually by design.
If you want to decommission all DataNodes regardless of their host maintenance 
mode, you need to change "RequestInfo/level" from "CLUSTER" to "HOST_COMPONENT".
When you set the "level" to "CLUSTER", bulk operations (in this case 
decommission) would be skipped on the matching target resources in case the 
host(s) are in maintenance mode.
If you set to "HOST_COMPONENT", it would ignore any host-level maintenance mode.
This is a really mysterious, undocumented part of Ambari, unfortunately.

Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 9:32 AM
To: Sean Roberts mailto:srobe...@hortonworks.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

I have verified that if maintenance mode is set on a host, then it is ignored 
by the decommission process, but only if you try to decommission multiple hosts 
at the same time.  I'll open a bug.

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:34 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

Greg - Same here on submitting JSON. Although they are JSON documents you have 
to submit them as plain form. This is true across all of Ambari. I opened a bug 
for it a month back.


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill <greg.h...@rackspace.com>
Date: March 2, 2015 at 19:32:34
To: Sean Roberts <srobe...@hortonworks.com>, user@ambari.apache.org
Subject:  Re: decommission multiple nodes issue

That causes a server error.  I’ve yet to see any part of the API that accepts 
JSON arrays like that as input; it’s almost always, if not always, a 
comma-separated string like I posted.  Many methods even return double-encoded 
JSON values (i.e. "key": "[\"value1\",\"value2\"]").  It's kind of annoying and 
inconsistent, honestly, and not documented anywhere.  You just have to have 
your client code choke on it and then go add another data[key] = 
json.loads(data[key]) in the client to account for it.

I am starting to think it’s because I set the nodes into maintenance mode 
first, as doing the decommission command manually from the client works fine 
when the nodes aren’t in maintenance mode.  I’ll keep digging, I guess, but it 
is weird that the exact same command worked this time (the commandArgs are 
identical to the one that did nothing).

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:22 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue


Racker Greg - I’m not familiar with the decommissioning API, but if it’s 
consistent with the rest of Ambari, you’ll need to change from this:

"excluded_hosts": “slave-1.local,slave-2.local"

To this:

"excluded_hosts" : [ "slave-1.local","slave-2.local" ]


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill <greg.h...@rackspace.com>
Reply: user@ambari.apache.org

Re: decommission multiple nodes issue

2015-03-03 Thread Yusaku Sako
BTW, I've started a new Wiki on decommissioning DataNodes: 
https://cwiki.apache.org/confluence/display/AMBARI/API+to+decommission+DataNodes

Yusaku

From: Yusaku Sako mailto:yus...@hortonworks.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 7:41 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>, Sean Roberts 
mailto:srobe...@hortonworks.com>>
Subject: Re: decommission multiple nodes issue

Hi Greg,

This is actually by design.
If you want to decommission all DataNodes regardless of their host maintenance 
mode, you need to change "RequestInfo/level" from "CLUSTER" to "HOST_COMPONENT".
When you set the "level" to "CLUSTER", bulk operations (in this case 
decommission) would be skipped on the matching target resources in case the 
host(s) are in maintenance mode.
If you set to "HOST_COMPONENT", it would ignore any host-level maintenance mode.
This is a really mysterious, undocumented part of Ambari, unfortunately.

Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 9:32 AM
To: Sean Roberts mailto:srobe...@hortonworks.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

I have verified that if maintenance mode is set on a host, then it is ignored 
by the decommission process, but only if you try to decommission multiple hosts 
at the same time.  I'll open a bug.

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:34 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

Greg - Same here on submitting JSON. Although they are JSON documents you have 
to submit them as plain form. This is true across all of Ambari. I opened a bug 
for it a month back.


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill <greg.h...@rackspace.com>
Date: March 2, 2015 at 19:32:34
To: Sean Roberts <srobe...@hortonworks.com>, user@ambari.apache.org
Subject:  Re: decommission multiple nodes issue

That causes a server error.  I’ve yet to see any part of the API that accepts 
JSON arrays like that as input; it’s almost always, if not always, a 
comma-separated string like I posted.  Many methods even return double-encoded 
JSON values (i.e. "key": "[\"value1\",\"value2\"]").  It's kind of annoying and 
inconsistent, honestly, and not documented anywhere.  You just have to have 
your client code choke on it and then go add another data[key] = 
json.loads(data[key]) in the client to account for it.

I am starting to think it’s because I set the nodes into maintenance mode 
first, as doing the decommission command manually from the client works fine 
when the nodes aren’t in maintenance mode.  I’ll keep digging, I guess, but it 
is weird that the exact same command worked this time (the commandArgs are 
identical to the one that did nothing).

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:22 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue


Racker Greg - I’m not familiar with the decommissioning API, but if it’s 
consistent with the rest of Ambari, you’ll need to change from this:

"excluded_hosts": “slave-1.local,slave-2.local"

To this:

"excluded_hosts" : [ "slave-1.local","slave-2.local" ]


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill <greg.h...@rackspace.com>
Reply: user@ambari.apache.org
Date: March 2, 2015 at 19:08:13
To: user@ambari.apache.org
Subject:  decommission multiple nodes issue

I have some code for decommissioning datanodes prior to removal.  It seems to 
work fine with a single node, but with multiple nodes it fails.  When passing 
multiple hosts, I am putting the names in a comma-separated string, as seems to 
be the custom with other Ambari API commands.  I attempted to send it as a JSON 
array, but the server complained about that

Re: decommission multiple nodes issue

2015-03-03 Thread Yusaku Sako
Hi Greg,

This is actually by design.
If you want to decommission all DataNodes regardless of their host maintenance 
mode, you need to change "RequestInfo/level" from "CLUSTER" to "HOST_COMPONENT".
When you set the "level" to "CLUSTER", bulk operations (in this case 
decommission) would be skipped on the matching target resources in case the 
host(s) are in maintenance mode.
If you set to "HOST_COMPONENT", it would ignore any host-level maintenance mode.
This is a really mysterious, undocumented part of Ambari, unfortunately.

Yusaku

From: Greg Hill mailto:greg.h...@rackspace.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, March 3, 2015 9:32 AM
To: Sean Roberts mailto:srobe...@hortonworks.com>>, 
"user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

I have verified that if maintenance mode is set on a host, then it is ignored 
by the decommission process, but only if you try to decommission multiple hosts 
at the same time.  I'll open a bug.

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:34 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue

Greg - Same here on submitting JSON. Although they are JSON documents you have 
to submit them as plain form. This is true across all of Ambari. I opened a bug 
for it a month back.


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill
Date: March 2, 2015 at 19:32:34
To: Sean Roberts, user@ambari.apache.org
Subject:  Re: decommission multiple nodes issue

That causes a server error.  I’ve yet to see any part of the API that accepts 
JSON arrays like that as input; it’s almost always, if not always, a 
comma-separated string like I posted.  Many methods even return double-encoded 
JSON values (i.e. "key": "[\"value1\",\"value2\"]").  It's kind of annoying and 
inconsistent, honestly, and not documented anywhere.  You just have to have 
your client code choke on it and then go add another data[key] = 
json.loads(data[key]) in the client to account for it.

I am starting to think it’s because I set the nodes into maintenance mode 
first, as doing the decommission command manually from the client works fine 
when the nodes aren’t in maintenance mode.  I’ll keep digging, I guess, but it 
is weird that the exact same command worked this time (the commandArgs are 
identical to the one that did nothing).

Greg

From: Sean Roberts mailto:srobe...@hortonworks.com>>
Date: Monday, March 2, 2015 at 1:22 PM
To: Greg mailto:greg.h...@rackspace.com>>, 
"user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Re: decommission multiple nodes issue


Racker Greg - I’m not familiar with the decommissioning API, but if it’s 
consistent with the rest of Ambari, you’ll need to change from this:

"excluded_hosts": “slave-1.local,slave-2.local"

To this:

"excluded_hosts" : [ "slave-1.local","slave-2.local" ]


--
Hortonworks - We do Hadoop

Sean Roberts
Partner Solutions Engineer - EMEA
@seano

From: Greg Hill
Reply: user@ambari.apache.org
Date: March 2, 2015 at 19:08:13
To: user@ambari.apache.org
Subject:  decommission multiple nodes issue

I have some code for decommissioning datanodes prior to removal.  It seems to 
work fine with a single node, but with multiple nodes it fails.  When passing 
multiple hosts, I am putting the names in a comma-separated string, as seems to 
be the custom with other Ambari API commands.  I attempted to send it as a JSON 
array, but the server complained about that.  Let me know if that is the wrong 
format.  The decommission request completes successfully, it just never writes 
the excludes file so no nodes are decommissioned.

This fails for multiple nodes:

"RequestInfo": {
    "command": "DECOMMISSION",
    "context": "Decommission DataNode",
    "parameters": {"slave_type": "DATANODE", "excluded_hosts": 
"slave-1.local,slave-2.local"},
    "operation_level": {
        "level": "CLUSTER",
        "cluster_name": cluster_name
    }
},
"Requests/resource_filters": [{
    "service_name": "HDFS",
    "component_name": "NAMENODE"
}],

But this works for a single node:

"RequestInfo": {
    "command": "DECOMMISSION",
    "context": "Decommission DataNode",
    "parameters": {"slave_type": "DATANODE", "excluded_hosts": 
"slave-1.loc

Re: Permanent changes to Ganglia config (how to)

2015-03-02 Thread Yusaku Sako
Hi Fabio,

Since Ambari manages these files, you need to make modifications to 
corresponding files under 
/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/GANGLIA on the 
Ambari Server instead of /etc/ganglia, /usr/lib/exec/hdp/ganglia, etc.  Once 
you modify appropriate files, you need to do "ambari-server restart" to let the 
Ambari Server reload these changes.  Then you go to Ambari Web UI and restart 
Ganglia to push the files out to the nodes.
The same procedure can be applied for other services that you might want to 
customize.
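
For example, a sketch of that workflow (which file under the GANGLIA service 
directory generates gmetad.conf is an assumption; grep is just one way to 
locate it):

cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/GANGLIA
grep -rl gmetad.conf .        # find the script/template that generates it
# edit the file found above, then reload the stack definition:
ambari-server restart
# finally, restart Ganglia from Ambari Web to push the files out to the nodes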

I hope this helps.

Yusaku

From: "Fabio C." mailto:anyte...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Sunday, March 1, 2015 10:41 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Permanent changes to Ganglia config (how to)

Hi everyone,

I was trying to change the Ganglia sampling rate (let's say from 15 to 5 
seconds). I made some changes on the ambari-server node in the file 
/etc/ganglia/hdp/gmetad.conf but, when I restart the Ganglia service in Ambari, 
it is reverted to the original one.
Then I modified the file /usr/libexec/hdp/ganglia/gmetadLib.sh, which looks 
like the generator of the gmetad.conf, but even this file is then reverted to 
the original one.

How can I make a permanent change to gmetad.conf? Will these changes be 
broadcast to all the other nodes?

Thanks a lot

Fabio


Re: Ambari based uninstall

2015-02-26 Thread Yusaku Sako
Hi Steve,

On all your hosts, try running "python 
/usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py".
This should delete binaries, directories, alternatives, etc.
After that, run "ambari-server reset" on the ambari-server host.  This 
initializes the Ambari postgres db.
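
In sequence:

# on every cluster host
python /usr/lib/python2.6/site-packages/ambari_agent/HostCleanup.py
# then, on the ambari-server host only
ambari-server reset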

Yusaku

From: Steve Edison mailto:sediso...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Thursday, February 26, 2015 6:02 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Fwd: Ambari based uninstall

Team,

I am using Ambari to install a cluster which now needs to be deleted and 
re-installed.

Is there a clean way to uninstall the cluster, clean up all the binaries from 
all the nodes, and do a fresh install?

There is no data on the cluster, so nothing to worry about.

Thanks in advance



Re: HDP Questions

2015-02-17 Thread Yusaku Sako
Hi Shaik,

http://hortonworks.com/community/forums/ is the right place to ask HDP-related 
questions.

Thanks,
Yusaku


From: Hadoop Solutions mailto:munna.had...@gmail.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Tuesday, February 17, 2015 8:13 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: HDP Questions

Hi,

Please let me know the right mailing list for posting HDP-related issues.

Thanks,
Shaik


Re: Advice on building Ambari from source?

2015-02-02 Thread Yusaku Sako
Hi Jun,

That someone was me :)
It kept reporting false build failures (when the non-docker one was passing), 
so I had disabled it as it was adding a lot of noise on JIRAs.

Yusaku

From: jun aoki mailto:ja...@apache.org>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Monday, February 2, 2015 2:09 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Cc: "bec...@hellmar-becker.de" 
mailto:bec...@hellmar-becker.de>>
Subject: Re: Advice on building Ambari from source?

Also, the OSS build passes, so it might be an environment issue.
https://builds.apache.org/view/A-D/view/Ambari/job/Ambari-trunk-Commit/

I wanted to refer to the Docker version of the Ambari build job, but it is 
disabled. (I'm running #841 manually; I assume there were some issues with it 
and someone disabled it.)

The Docker container provides an identical environment on any node as long as 
Docker is running, so you won't see issues caused by environmental differences 
like Python version, Java version, yum vs. apt-get packages, etc.
Let me see what I can do.




On Mon, Feb 2, 2015 at 1:55 PM, Benoit Perroud 
mailto:ben...@noisette.ch>> wrote:
I built the branch 1.7 without any problems


On Friday, January 30, 2015, Hellmar Becker 
mailto:bec...@hellmar-becker.de>> wrote:

Hi all,

I have been trying to build Ambari from the newest sources following the 
instructions in the Ambari wiki but I seem to be failing in the tests for 
ambari-server.

Here is a snippet from my build log:



Results :

Failed tests:   
testRequestURL(org.apache.ambari.server.view.HttpImpersonatorImplTest): 
expected:<[Dummy text from HTTP response]> but was:<[]>
  
testRequestURLWithCustom(org.apache.ambari.server.view.HttpImpersonatorImplTest):
 expected:<[Dummy text from HTTP response]> but was:<[]>
  
testHeartbeatStateCommandsEnqueueing(org.apache.ambari.server.agent.TestHeartbeatMonitor):
 HeartbeatMonitor should be already stopped

Tests in error:
  
testDeadlockBetweenImplementations(org.apache.ambari.server.state.cluster.ClusterDeadlockTest):
 test timed out after 3 milliseconds
  
testUpdateRepoUrlController(org.apache.ambari.server.controller.AmbariManagementControllerTest):
 Could not access base url . 
http://public-repo-1.hortonworks.com/HDP-1.1.1.16/repos/centos5 . 
java.net.UnknownHostException: 
public-repo-1.hortonworks.com

Tests run: 2628, Failures: 3, Errors: 2, Skipped: 15



This is on CentOS 6.5, using the trunk branch. Or is there a defined branch 
that I should check out for a successful build?

Regards,
Hellmar


Hellmar Becker
Edmond Audranstraat 55
NL-3543BG Utrecht
mail: bec...@hellmar-becker.de
mobile: +31 6 29986670





--
-jun


Re: moving master nodes on ambari 1.7?

2015-01-31 Thread Yusaku Sako



Hi Jun,

1.7 did not support moving masters except for NameNode, SNameNode, and 
ResourceManager, I believe.
The ability to move Oozie Server, HiveServer2, Hive Metastore, Hive Metastore 
DB, WebHCat Server, and App Timeline Server (possibly others) was added for 
2.0 (the work has already been done in trunk).

Yusaku



From: jun aoki mailto:ja...@apache.org>>
Reply-To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Date: Friday, January 30, 2015 12:36 PM
To: "user@ambari.apache.org<mailto:user@ambari.apache.org>" 
mailto:user@ambari.apache.org>>
Subject: moving master nodes on ambari 1.7?

Hi Ambari team,

I have figured out that the namenode and resource manager can be moved to 
another node on Ambari 1.7.
Are there any other components that Ambari supports moving?
e.g. History Server, HBase Master, Oozie Server, Hive Metastore, Hive Server.

--
-jun


Re: Ambari dashboard display no data

2015-01-30 Thread Yusaku Sako
Hi,

Is httpd running on vm-galaxy04.aware.com?

Does this work from the Ambari server host?

curl http://vm-galaxy04.aware.com/ganglia/graph.php?g=cpu_report&json=1

Also, can you hit 
http://vm-galaxy04.aware.com/ganglia/graph.php?g=cpu_report&json=1 from the 
browser?  Do you see anything?

Yusaku

From: , Tao mailto:ta...@aware.com>>
Reply-To: "'user@ambari.apache.org'" 
mailto:user@ambari.apache.org>>
Date: Friday, January 30, 2015 7:04 AM
To: "'user@ambari.apache.org'" 
mailto:user@ambari.apache.org>>
Subject: RE: Ambari dashboard display no data

More info about this dashboard "no metrics" issue:

I restarted the Ganglia server and monitors from the Ambari Web UI (on each 
host) successfully.  On the Ganglia server host, I ran the command "service 
hdp-gmond status" and got a response like this:

===
Checking status of hdp-gmond...
===
/usr/sbin/gmond for cluster HDPSlaves running with PID 30136
/usr/sbin/gmond for cluster HDPNimbus running with PID 30164
/usr/sbin/gmond for cluster HDPSupervisor running with PID 30192

So I believe all 3 gmond services are up and running properly.

Also on Ganglia server (gmetad) host, I can see all metrics rrd files (like 
cpu_num.rrd, disk_free.rrd, etc) are under /var/lib/ganglia/rrds.

But still, the dashboard does not show any graphs; instead it only displays "No 
Data. There was no data available. Possible reasons include inaccessible 
Ganglia service."

Any advice?

From: Yu, Tao [mailto:ta...@aware.com]
Sent: Thursday, January 29, 2015 11:24 AM
To: 'user@ambari.apache.org'
Subject: Ambari dashboard display no data

Hi all,

I have a newly installed HDP-2.2 on a small cluster (2 hosts). The installation 
(via the install wizard), including services like Ganglia / Zookeeper / Storm, 
went smoothly (CentOS 6.5), and ambari-server and all desired services are up 
and running. But when I log in to the Ambari web UI, the dashboard shows nothing 
for any of the standard metrics (Memory / Network / CPU / Cluster load); instead 
the UI shows the message below:

No data. There was no data available.  Possible reasons include inaccessible 
Ganglia service

I then checked both hosts where Ganglia server daemon (gmetad) and client 
daemon (gmond) run, all daemons are running:

#ps -ef |grep gmetad
nobody   31970 1  0 10:38 ?00:00:02 /usr/sbin/gmetad 
--conf=/etc/ganglia/hdp/gmetad.conf --pid-file=/var/run/ganglia/hdp/gmetad.pid

#ps -ef | grep gmond
nobody   13681 1  0 10:38 ?00:00:19 /usr/sbin/gmond 
--conf=/etc/ganglia/hdp/HDPSlaves/gmond.core.conf 
--pid-file=/var/run/ganglia/hdp/HDPSlaves/gmond.pid

But when I checked the ambari-server.log, I got an ERROR message with a Java 
exception:

ERROR [qtp1835310612-22] GangliaReportPropertyProvider:153 - Caught exception 
getting Ganglia metrics : java.net.ConnectException: Connection refused : 
spec=http://vm-galaxy04.aware.com/ganglia/graph.php?g=cpu_report&json=1

The ERROR message looks like a network connection issue, but the hosts look OK 
on the network. I have tried restarting the Ganglia services with no luck, and 
restarted all services, but still the same.

Does anyone have any ideas on how I can solve the Java exception and get the 
dashboard to work properly?

Thank you!


Re: "confirm host" always fails when "deploy the cluster using Ambari Web UI"

2015-01-26 Thread Yusaku Sako
Tao,

Can you look into /var/log/ambari-agent/ambari-agent.log?
It should help identify errors.

Yusaku

On Mon, Jan 26, 2015 at 11:11 AM, Yu, Tao  wrote:

>  Hi Jayush,
>
>
>
> Target OS:   CentOS release 6.6
>
> Python:v2.6.6
>
>
>
> Thanks,
>
> -Tao
>
>
>
> From: Jayush Luniya [mailto:jlun...@hortonworks.com]
> Sent: Monday, January 26, 2015 1:56 PM
> To: user@ambari.apache.org
> Subject: Re: "confirm host" always fails when "deploy the cluster using
> Ambari Web UI"
>
>
>
> Hi Yu,
>
> What is your target OS and python version you are using.
>
> Regards
>
> Jayush
>
>
>
> On Mon, Jan 26, 2015 at 7:00 AM, Yu, Tao  wrote:
>
> Hi all,
>
>
>
> I am deploying the cluster using Ambari Web UI (Ambari v1.7). But at step
> "confirm host", it always fails without any specific error message. I only have 1
> node in the cluster (the simplest case), and provide the "Fully qualified
> Domain Name" (like ..com), with SSH private key
> file (id_rsa.pub).
>
>
>
> Do I miss anything to pass this step?
>
>
>
> Thanks
>
>
>
>



Re: How to Handle - Heartbeat Lost for Services ?

2015-01-23 Thread Yusaku Sako
Also, you want to make sure that the clocks are in sync across all hosts (ntpd
helps).

Yusaku

On Fri, Jan 23, 2015 at 4:20 PM, Alejandro Fernandez <
afernan...@hortonworks.com> wrote:

> Hi Krish,
>
> Check that the agent can ping the server, and that the /etc/hosts file has
> fully qualified names.
> Have there been any recent network interface changes?
>
> Thanks,
> Alejandro
>
> On Fri, Jan 23, 2015 at 4:15 PM, Krish Donald 
> wrote:
>
>> Hi,
>>
>> I am setting up the Hadoop cluster using Ambari, but for a few services I
>> am getting the status "Heartbeat lost".
>>
>> Host is up and running and other services are running on the same host.
>>
>> How should I take care of this?
>>
>> Thanks
>> Krish
>>
>>
>
>
> --
> Alejandro Fernandez
> Engineering - Ambari
> 786.303.7149
>



Re: Config the directory from ambari web

2015-01-21 Thread Yusaku Sako
Looks like this was introduced via
https://issues.apache.org/jira/browse/AMBARI-4162.
Ambari should not disallow directories like /home/... from being used
in config properties if the user wishes.
I think the intent was to not include /home as one of the mount points
to be considered when making default recommendations for config
properties related to directories.

Liulu, do you want to go ahead and file a bug for that?

Thanks,
Yusaku

On Wed, Jan 21, 2015 at 11:22 AM, jun aoki  wrote:
> Good point. It seems the check comes from here
> https://github.com/apache/ambari/blob/trunk/ambari-web/app/models/service_config.js#L963
>
> Does anyone know why this restriction is taken place?
>
> On Wed, Jan 21, 2015 at 2:28 AM, 刘禄  wrote:
>>
>> Hello.
>>
>> Why does Ambari reject the directory with "Can't start with home(s)"?
>>
>> I want to set the directory to be like "/home/xxx/dir", but Ambari
>> refuses.
>>
>> I think the directory can be any dir that the user wants.
>>
>>
>> --
>> liulu
>>
>>
>
>
>
> --
> -jun



Re: Automating the security setup

2015-01-15 Thread Yusaku Sako
Hi Becker,

We are targeting the upcoming Ambari 2.0 for adding support to Kerberize
the cluster in an automated fashion with only a few REST API calls and minimal
user intervention: https://issues.apache.org/jira/browse/AMBARI-7204

It would automate Kerberos client installation, principal/keytab generation
and distribution, etc., and is designed to work with an external MIT KDC as
well as Active Directory.  It would also make it much easier to handle
adding new services / hosts /components on an already Kerberized cluster.
You will not have to directly and explicitly set a bunch of Kerberos
related parameters in many configurations across different services.
The legacy implementation using the CSV file is going away.
A lot of the code to make this work is already on trunk but it is still
actively being worked on as we speak.

Yusaku

On Thu, Jan 15, 2015 at 8:29 AM, Hellmar Becker 
wrote:

> Hello,
>
> At ING, we are currently automating deployment of a HDP-based cluster
> using Ambari blueprints and the REST API. We would like to also enable
> Kerberos based security in this way. A couple of questions:
>
> - Is it possible to enable security with a single REST call which would be
> equivalent to the "Enable Security" button in the GUI?
>
> - Or, would we need to figure out the settings for each service and
> incorporate those in our blueprint?
>
> - As for the keytab generation, Ambari creates a CSV that lists all
> principals that need keytabs, along with the locations and permissions for
> the keytab files. Can this CSV file be generated through a REST call? Or
> any other suggestions how to automate this steps?
>
> Thanks for any ideas that you can share on these issues.
>
> 
> Hellmar Becker
> Edmond Audranstraat 55
> NL-3543BG Utrecht
> mail: bec...@hellmar-becker.de
> mobile: +31 6 29986670
> 
>
>



Re: problem with historyserver on secondary namenode

2014-12-23 Thread Yusaku Sako
No worries.  Glad you figured it out.

Yusaku

On Tue, Dec 23, 2014 at 12:33 PM, Greg Hill  wrote:

>  The problem is the namenode is only listening on localhost:
>
>  [root@master-1 ~]# netstat -pl --numeric-ports --numeric-hosts | grep
> 10975
> tcp0  0 127.0.0.1:8020  0.0.0.0:*
>   LISTEN  10975/java
> tcp0  0 127.0.0.1:50070 0.0.0.0:*
>   LISTEN  10975/java
> udp0  0 0.0.0.0:50091   0.0.0.0:*
>   10975/java
>
>  Likely this is caused by a misconfiguration on our end.  Sorry for the
> false alarm.
>
>  Greg
>
>   From: Greg 
> Reply-To: "user@ambari.apache.org" 
> Date: Tuesday, December 23, 2014 2:01 PM
> To: "user@ambari.apache.org" 
> Subject: Re: problem with historyserver on secondary namenode
>
>   I may have been hasty in my diagnosis.  It doesn't appear to start even
> after hdfs is up and running fine.  I'll dig more and see if I can figure
> out the real culprit here.
>
>  Greg
>
>   From: Greg 
> Reply-To: "user@ambari.apache.org" 
> Date: Tuesday, December 23, 2014 1:51 PM
> To: "user@ambari.apache.org" 
> Subject: problem with historyserver on secondary namenode
>
>   Trying to use Ambari 1.7.0 to provision an HDP 2.2 cluster.  The
> layout I'm using has the yarn history server on the same host as the
> secondary namenode (the primary namenode is on another host), but it fails
> to start because it tries to interact with hdfs before hdfs is ready.
> Here's a gist with the error:
>
>  https://gist.github.com/jimbobhickville/a25cef3a2355fc273984
>
>  Is this a bug in Ambari?  Is there any way for me to control this
> behavior via configuration or in my stack layout?  I imagine this type of
> scenario has to have come up previously.
>
>  Greg
>
>



Re: HTTPS with Ambari

2014-12-17 Thread Yusaku Sako
The guys who wrote the agent can chime in, but AFAIK it has not changed
since 1.2.0.

Thanks,
Yusaku

On Wed, Dec 17, 2014 at 3:59 PM, Aaron Cody  wrote:
>
>  "Server-agent communication is always over HTTPS over port 8440/8441.”
>
>  Was this true in Ambari 1.2.4?
>
>  thanks
>
>   From: Yusaku Sako 
> Reply-To: "user@ambari.apache.org" 
> Date: Wednesday, December 17, 2014 at 3:33 PM
> To: "user@ambari.apache.org" 
> Subject: Re: HTTPS with Ambari
>
>   Hi Aaron,
>
>  Server-agent communication is always over HTTPS over port 8440/8441.
> "Enable HTTPS for Ambari Server" is for server-client communication.  By
> default, it is HTTP over 8080.
>
>  Yusaku
>
> On Wed, Dec 17, 2014 at 3:18 PM, Aaron Cody  wrote:
>>
>>  does selecting ‘Enable HTTPS for Ambari Server’ imply that the
>> server-agent connections will also be HTTPS ?
>>
>>  TIA
>>
>>
>



Re: HTTPS with Ambari

2014-12-17 Thread Yusaku Sako
Hi Aaron,

Server-agent communication is always over HTTPS over port 8440/8441.
"Enable HTTPS for Ambari Server" is for server-client communication.  By
default, it is HTTP over 8080.
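
For example, server-client HTTPS is turned on through the interactive setup 
(menu wording and numbering vary by Ambari version; port 8443 is just a common 
choice):

ambari-server setup-security
# choose "Enable HTTPS for Ambari server", then supply the certificate,
# private key, and desired HTTPS port (e.g. 8443)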

Yusaku

On Wed, Dec 17, 2014 at 3:18 PM, Aaron Cody  wrote:
>
>  does selecting ‘Enable HTTPS for Ambari Server’ imply that the
> server-agent connections will also be HTTPS ?
>
>  TIA
>
>



Re: problem starting the Ambari server after running upgrade. Should I remove postgre?

2014-12-15 Thread Yusaku Sako
Hi there,

Do you mean you are running Ambari 1.6.1 and you are trying to upgrade to
Ambari 1.7.0?  (There is no Ambari 1.7.1).
What steps did you follow for upgrading?  Did you download the repo file
for 1.7.0 and perform "yum upgrade ambari-server ambari-agent", etc.?
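
For reference, the usual path looks roughly like this (the repo URL is a 
placeholder for the actual 1.7.0 repo file):

wget -O /etc/yum.repos.d/ambari.repo http://<ambari-1.7.0-repo>/ambari.repo
yum upgrade ambari-server ambari-agent
ambari-server upgrade
ambari-server start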

Yusaku

On Mon, Dec 15, 2014 at 8:35 AM, David Novogrodsky <
david.novogrod...@gmail.com> wrote:
>
> I am running Ambari version 1.7.1.  I have been having some problem with
> Ambari recognizing the nodes on the network.  I decided to try upgrading
> using this command:
> >>ambari-server upgrade
>
> When I tried to start the server, ambari-server start, I got this error:
> (from /var/log/ambari-server/ambari-server.out)
>
>
> [EL Warning]: metadata: 2014-12-15
> 10:28:33.117--ServerSession(335791427)--The reference column name
> [resource_type_id] mapped on the element [field permissions] does not
> correspond to a valid$
> [EL Info]: 2014-12-15 10:28:35.741--ServerSession(335791427)--EclipseLink,
> version: Eclipse Persistence Services - 2.4.0.v20120608-r11652
> [EL Info]: connection: 2014-12-15
> 10:28:36.226--ServerSession(335791427)--file:/usr/lib/ambari-server/ambari-server-1.7.0.169.jar_ambari-server_url=jdbc:postgresql://localhost/postgres_user=am$
> [EL Warning]: 2014-12-15 10:28:36.343--UnitOfWork(1435878060)--Exception
> [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652):
> org.eclipse.persistence.exceptions.Database$
> Internal Exception: org.postgresql.util.PSQLException: ERROR: relation
> "metainfo" does not exist
>   Position: 46
> Error Code: 0
> Call: SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE
> ("metainfo_key" = ?)
> bind => [1 parameter bound]
> Query: ReadObjectQuery(name="readObject" referenceClass=MetainfoEntity
> sql="SELECT "metainfo_key", "metainfo_value" FROM metainfo WHERE
> ("metainfo_key" = ?)")
>
> I am not sure what are the next steps.
> David Novogrodsky
> david.novogrod...@gmail.com
> http://www.linkedin.com/in/davidnovogrodsky
>



Re: Problem with Ambari 1.7 recognizing hosts running CentOS 6

2014-12-15 Thread Yusaku Sako
Did you change the FQDNs like I proposed, like namenode.localdomain, rather
than localhost.namenode?
Did you ensure that the 3 commands returned the results as shown?
Can each host resolve all the other hosts by name?

If you want to get a cluster up and running on VMs, the best bet is to use:
https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

This sets up all /etc/hosts and other settings in the way you want.
Then you can see how these VMs are being set up and mimic that on your own VMs
if you'd rather set them up from scratch.

I hope this helps.
Yusaku


On Mon, Dec 15, 2014 at 8:18 AM, David Novogrodsky <
david.novogrod...@gmail.com> wrote:
>
> Ok, I removed the multiple instances of localhost.namenode.  It now only
> appears on one line in the hosts file.
>
> The main ambari server still cannot see the data nodes nor the node Ambari
> is on.  Ambari is on the namenode.  When I run the install, the install
> program cannot connect to any node in the network.
>
> Also I tried running /etc/init.d/network restart on one of the nodes;
> datanode10 (a virtual machine).  Now that node cannot connect to the
> internet. I would like to send you the information but I am having
> problems sending the document from the virtual machine.
>
> I do not have a DNS.  These machines have hardwired IP addresses and names
> in the host file. Did running /etc/init.d/network restart break the connection?
>
>
> David Novogrodsky
> david.novogrod...@gmail.com
> http://www.linkedin.com/in/davidnovogrodsky
>
> On Sat, Dec 13, 2014 at 12:46 AM, Yusaku Sako 
> wrote:
>>
>> You can just make the changes in /etc/hosts.  You might also
>> change /etc/sysconfig/network and run /etc/init.d/network restart.
>>
>> Then make sure that running the 3 commands return expected results.
>>
>> Yusaku
>>
>> On Fri, Dec 12, 2014 at 9:06 PM, David Novogrodsky <
>> david.novogrod...@gmail.com> wrote:
>>>
>>> When I installed CentOS on the machines, I chose those names,
>>> localhost.datanode01...and so on.  You mean I have to reinstall CentOS on
>>> the machines again?
>>>
>>> Can I just make the changes in the host files?
>>>
>>> Will I need to recreate the SSH keys?
>>>
>>> David Novogrodsky
>>> david.novogrod...@gmail.com
>>> http://www.linkedin.com/in/davidnovogrodsky
>>>
>>> On Fri, Dec 12, 2014 at 6:21 PM, Yusaku Sako 
>>> wrote:
>>>
>>>> I would set it up like this:
>>>>
>>>> 127.0.0.1 localhost localhost.localdomain localhost4
>>>> localhost4.localdomain4   <- do not list the hostname here
>>>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>>> xxx.xxx.200.144 datanode10.localdomain
>>>> xxx.xxx.200.107 datanode01.localdomain
>>>> xxx.xxx.200.143 namenode.localdomain namenode
>>>>
>>>> With this change:
>>>> * hostname -f should display namenode.localdomain
>>>> * hostname should display namenode
>>>> * python -c 'import socket; print socket.getfqdn()' should display
>>>> namenode.localdomain
>>>>
>>>> I hope this helps.
>>>> Yusaku
>>>>
>>>> On Fri, Dec 12, 2014 at 3:52 PM, David Novogrodsky <
>>>> david.novogrod...@gmail.com> wrote:
>>>>>
>>>>> All,
>>>>>
>>>>> I am having a problem with Ambari.
>>>>> I am trying to use Ambari to install Hadoop to a three node cluster.
>>>>> The name node is where the Ambari server is located. I am getting this
>>>>> error:
>>>>> ERROR 2014-12-12 17:39:56,963 main.py:137 – Ambari agent machine
>>>>> hostname (localhost.localdomain) does not match expected ambari server
>>>>> hostname (namenode). Aborting registration. Please check hostname, 
>>>>> hostname
>>>>> -f and /etc/hosts file to confirm your hostname is setup correctly
>>>>> ‘, None)
>>>>>
>>>>> Here is the contents of my hosts file:
>>>>> 127.0.0.1 localhost localhost.localdomain localhost4
>>>>> localhost4.localdomain4 localhost.namenode namenode
>>>>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>>>> xxx.xxx.200.144 localhost.datanode10
>>>>> xxx.xxx.200.107 localhost.datanode01
>>>>> xxx.xxx.200.143 localhost.namenode namenode
>>>>>
>>>>> I am not sure w

Re: Problem with Ambari 1.7 recognizing hosts running CentOS 6

2014-12-12 Thread Yusaku Sako
You can just make the changes in /etc/hosts.  You might also
change /etc/sysconfig/network and run /etc/init.d/network restart.

Then make sure that running the 3 commands returns the expected results.
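
For example, a sketch using the namenode host from your /etc/hosts (run as 
root; CentOS 6 conventions assumed):

hostname namenode
sed -i 's/^HOSTNAME=.*/HOSTNAME=namenode/' /etc/sysconfig/network
/etc/init.d/network restart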

Yusaku

On Fri, Dec 12, 2014 at 9:06 PM, David Novogrodsky <
david.novogrod...@gmail.com> wrote:
>
> When I installed CentOS on the machines, I chose those names,
> localhost.datanode01...and so on.  You mean I have to reinstall CentOS on
> the machines again?
>
> Can I just make the changes in the host files?
>
> Will I need to recreate the SSH keys?
>
> David Novogrodsky
> david.novogrod...@gmail.com
> http://www.linkedin.com/in/davidnovogrodsky
>
> On Fri, Dec 12, 2014 at 6:21 PM, Yusaku Sako 
> wrote:
>
>> I would set it up like this:
>>
>> 127.0.0.1 localhost localhost.localdomain localhost4
>> localhost4.localdomain4   <- do not list the hostname here
>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>> xxx.xxx.200.144 datanode10.localdomain
>> xxx.xxx.200.107 datanode01.localdomain
>> xxx.xxx.200.143 namenode.localdomain namenode
>>
>> With this change:
>> * hostname -f should display namenode.localdomain
>> * hostname should display namenode
>> * python -c 'import socket; print socket.getfqdn()' should display
>> namenode.localdomain
>>
>> I hope this helps.
>> Yusaku
>>
>> On Fri, Dec 12, 2014 at 3:52 PM, David Novogrodsky <
>> david.novogrod...@gmail.com> wrote:
>>>
>>> All,
>>>
>>> I am having a problem with Ambari.
>>> I am trying to use Ambari to install Hadoop to a three node cluster. The
>>> name node is where the Ambari server is located. I am getting this error:
>>> ERROR 2014-12-12 17:39:56,963 main.py:137 – Ambari agent machine
>>> hostname (localhost.localdomain) does not match expected ambari server
>>> hostname (namenode). Aborting registration. Please check hostname, hostname
>>> -f and /etc/hosts file to confirm your hostname is setup correctly
>>> ‘, None)
>>>
>>> Here is the contents of my hosts file:
>>> 127.0.0.1 localhost localhost.localdomain localhost4
>>> localhost4.localdomain4 localhost.namenode namenode
>>> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
>>> xxx.xxx.200.144 localhost.datanode10
>>> xxx.xxx.200.107 localhost.datanode01
>>> xxx.xxx.200.143 localhost.namenode namenode
>>>
>>> I am not sure what the problem is. Since there are only four steps to
>>> run Ambari, there is not a lot of background to determine the cause of this
>>> problem.
>>>
>>> David Novogrodsky
>>> david.novogrod...@gmail.com
>>> http://www.linkedin.com/in/davidnovogrodsky
>>>
>>
>
>

-- 
CONFIDENTIALITY NOTICE
NOTICE: This message is intended for the use of the individual or entity to 
which it is addressed and may contain information that is confidential, 
privileged and exempt from disclosure under applicable law. If the reader 
of this message is not the intended recipient, you are hereby notified that 
any printing, copying, dissemination, distribution, disclosure or 
forwarding of this communication is strictly prohibited. If you have 
received this communication in error, please contact the sender immediately 
and delete it from your system. Thank You.


Re: Problem with Ambari 1.7 recognizing hosts running CentOS 6

2014-12-12 Thread Yusaku Sako
I would set it up like this:

127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4   <- do not list the hostname here.
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
xxx.xxx.200.144 datanode10.localdomain
xxx.xxx.200.107 datanode01.localdomain
xxx.xxx.200.143 namenode.localdomain namenode

With this change:
* hostname -f should display namenode.localdomain
* hostname should display namenode
* python -c 'import socket; print socket.getfqdn()' should display
namenode.localdomain

I hope this helps.
Yusaku

On Fri, Dec 12, 2014 at 3:52 PM, David Novogrodsky <
david.novogrod...@gmail.com> wrote:
>
> All,
>
> I am having a problem with Ambari.
> I am trying to use Ambari to install Hadoop to a three node cluster. The
> name node is where the Ambari server is located. I am getting this error:
> ERROR 2014-12-12 17:39:56,963 main.py:137 – Ambari agent machine hostname
> (localhost.localdomain) does not match expected ambari server hostname
> (namenode). Aborting registration. Please check hostname, hostname -f and
> /etc/hosts file to confirm your hostname is setup correctly
> ‘, None)
>
> Here is the contents of my hosts file:
> 127.0.0.1 localhost localhost.localdomain localhost4
> localhost4.localdomain4 localhost.namenode namenode
> ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
> xxx.xxx.200.144 localhost.datanode10
> xxx.xxx.200.107 localhost.datanode01
> xxx.xxx.200.143 localhost.namenode namenode
>
> I am not sure what the problem is. Since there are only four steps to run
> ambari there is not a lot of background to determine the cause of this
> problem.
>
> David Novogrodsky
> david.novogrod...@gmail.com
> http://www.linkedin.com/in/davidnovogrodsky
>



Re: ambari-server 1.7.0 can't start

2014-12-08 Thread Yusaku Sako
Have you tried running "ambari-server reset"?  This will reinitialize the
DB schema.
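
As a sketch (note that reset wipes the Ambari database, so only run it on a
server you have not provisioned a cluster with; the psql listing is just an
optional sanity check and assumes the default embedded Postgres setup):

    ambari-server stop
    ambari-server reset     # prompts for confirmation, then re-creates the schema
    ambari-server start

    # optional: confirm which Ambari tables exist in the embedded Postgres
    su - postgres -c "psql -d ambari -c '\dt ambari.*'"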

Yusaku

On Mon, Dec 8, 2014 at 4:36 PM, guxiaobo1982  wrote:

> This is a new installation, I even uninstalled postgresql before
> installing ambari-server.
>
>
> -- Original --
> From: "Jonathan Hurley"
> Send time: Tuesday, Dec 9, 2014 2:30 AM
> To: "user"
> Subject: Re: ambari-server 1.7.0 can't start
>
> The database columns that it’s having a problem with are part of the 1.7.0
> upgrade process. If you were upgrading from a prior version, you’d need to
> run ‘ambari-server upgrade’ to ensure that your database was upgraded to
> 1.7.0. If this was a new installation, then perhaps you had an Ambari
> installation prior to this that left the postgres database on the server.
> In this case, I’d ensure that there are no Ambari tables in your database
> when installing a new instance of 1.7.0.
>
> On Dec 8, 2014, at 1:48 AM, guxiaobo1982  wrote:
>
> Hi,
>
> I installed a new ambari-server 1.7.0 instance followed by
> https://cwiki.apache.org/confluence/display/AMBARI/Install+Ambari+1.7.0+from+Public+Repositories
>
> but the ambari-server start command failed with the following error
> messages:
>
> [root@ambari profile.d]# ambari-server start
>
> Using python  /usr/bin/python2.6
>
> Starting ambari-server
>
> Ambari Server running with 'root' privileges.
>
> Organizing resource files at /var/lib/ambari-server/resources...
>
> Server PID at: /var/run/ambari-server/ambari-server.pid
>
> Server out at: /var/log/ambari-server/ambari-server.out
>
> Server log at: /var/log/ambari-server/ambari-server.log
>
> Waiting for server start
>
> ERROR: Exiting with exit code -1.
>
> REASON: Ambari Server java process died with exitcode 255. Check
> /var/log/ambari-server/ambari-server.out for more information.
>
> [root@ambari profile.d]# more /var/log/ambari-server/ambari-server.out
>
> [EL Warning]: metadata: 2014-12-08
> 12:48:04.141--ServerSession(1978541334)--The reference column name
> [resource_type_id] mapped on the element [field permissions] does not
> correspond
>
> to a valid id or basic field/column on the mapping reference. Will use
> referenced column name as provided.
>
> [EL Info]: 2014-12-08
> 12:48:06.126--ServerSession(1978541334)--EclipseLink, version: Eclipse
> Persistence Services - 2.4.0.v20120608-r11652
>
> [EL Info]: connection: 2014-12-08
> 12:48:06.446--ServerSession(1978541334)--file:/usr/lib/ambari-server/ambari-server-1.7.0.169.jar_ambari-server_url=jdbc:postgresql://localhost/ambari
>
> _user=ambari login successful
>
> [EL Warning]: 2014-12-08 12:48:08.867--UnitOfWork(1666151420)--Exception
> [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652):
> org.eclipse.persistence.exceptions
>
> .DatabaseException
>
> Internal Exception: org.postgresql.util.PSQLException: ERROR: column
> "resource_id" does not exist
>
>   Position: 114
>
> Error Code: 0
>
> Call: SELECT cluster_id, cluster_info, cluster_name,
> desired_cluster_state, desired_stack_version, provisioning_state,
> resource_id FROM clusters
>
> Query: ReadAllQuery(name="allClusters" referenceClass=ClusterEntity
> sql="SELECT cluster_id, cluster_info, cluster_name, desired_cluster_state,
> desired_stack_version, provisioning_stat
>
> e, resource_id FROM clusters")
>
> [EL Warning]: 2014-12-08 12:48:08.875--UnitOfWork(1936897728)--Exception
> [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652):
> org.eclipse.persistence.exceptions
>
> .DatabaseException
>
> Internal Exception: org.postgresql.util.PSQLException: ERROR: column
> "resource_id" does not exist
>
>   Position: 114
>
> Error Code: 0
>
> Call: SELECT cluster_id, cluster_info, cluster_name,
> desired_cluster_state, desired_stack_version, provisioning_state,
> resource_id FROM clusters
>
> Query: ReadAllQuery(name="allClusters" referenceClass=ClusterEntity
> sql="SELECT cluster_id, cluster_info, cluster_name, desired_cluster_state,
> desired_stack_version, provisioning_stat
>
> e, resource_id FROM clusters")
>
> [EL Warning]: 2014-12-08 12:48:08.877--UnitOfWork(1770101687)--Exception
> [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652):
> org.eclipse.persistence.exceptions
>
> .DatabaseException
>
> Internal Exception: org.postgresql.util.PSQLException: ERROR: column
> "resource_id" does not exist
>
>   Position: 114
>
> Error Code: 0
>
> Call: SELECT cluster_id, cluster_info, cluster_name,
> desired_cluster_state, desired_stack_version, provisioning_state,
> resource_id FROM clusters
>
> Query: ReadAllQuery(name="allClusters" referenceClass=ClusterEntity
> sql="SELECT cluster_id, cluster_info, cluster_name, desired_cluster_state,
> desired_stack_version, provisioning_stat
>
> e, resource_id FROM clusters")
>
> [EL Warning]: 2014-12-08 12:48:08.879--UnitOfWork(1124437166)--Exception
> [EclipseLink-4002] (Eclipse Persistence Services - 2.4.0.v20120608-r11652):
> org.eclipse.persistence.e

Re: unofficial python client

2014-12-03 Thread Yusaku Sako
Thanks Greg.

> I've had some discussion with Subin about making this new client the official 
> one, but he had some reservations about contractual obligations requiring it 
> be bundled with the server (is that true?  That makes no sense to me).

Subin, can you clarify what you meant?

Yusaku

On Wed, Dec 3, 2014 at 8:40 AM, Greg Hill  wrote:
> I wrote a new Python client and published it to Github.  Thought others
> might be interested.
>
> https://github.com/jimbobhickville/python-ambariclient
>
> I did first attempt to work on the official client, as I'm much more in
> favor of contributing over forking, but I didn't feel like the effort was
> well spent.  It needed a rewrite to a better foundation, and to do so
> required breaking backwards compatibility.
>
> I've had some discussion with Subin about making this new client the
> official one, but he had some reservations about contractual obligations
> requiring it be bundled with the server (is that true?  That makes no sense
> to me).  I'd rather work with the community on it than go solo, so hopefully
> we can resolve things to mutual satisfaction.
>
> In the meantime, if anyone else is interested in contributing to this
> client, please fork and submit a pull-request.  Or just try it out and
> submit bugs via Github.  I'd like to do everything in the open, so if
> there's sufficient interest, we could set up an open discussion to work
> together on improving it.
>
> Greg



Re: Access Ambari API through Service Install Script

2014-11-25 Thread Yusaku Sako
Can someone help?

Yusaku

On Sat, Nov 22, 2014 at 4:48 PM, Brian de la Motte 
wrote:

> Hi everyone,
>
> I'm trying to integrate a custom service into Ambari. The problem is the
> service's master during installation needs to know about the hosts that are
> to be the service's slaves. The only way I was able to find that info was
> by using the API.
>
> Is it a good idea to have the master's install script call the API to get
> that info or is there an easier way? I think it would work but the script
> would need to know the username and password for Ambari's API call.
>
> Is there a way to get the service's slave's hosts without the API or for a
> Python script to get the admin username and password from a config in order
> to call the API?
>
> The call I would possibly use is something like this:
>
> curl  --user admin:admin
> http://127.0.0.1:8080/api/v1/clusters/abc/services/CUST_SERVICE/components/CUST_SERVICE_SLAVE?fields=host_components/HostRoles/host_name
>
>
> I could parse out the hostnames from this but how do I make this call
> without knowing what username and password to use.
>
> Any ideas or alternative methods?
>
> Thank you!
>
> Brian
>
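
If calling the API turns out to be the only way, the host names can be parsed
out of that exact response with a one-liner, e.g. (a sketch; it still assumes
the script has working credentials, which is the open question here):

    curl -s -u admin:admin \
      'http://127.0.0.1:8080/api/v1/clusters/abc/services/CUST_SERVICE/components/CUST_SERVICE_SLAVE?fields=host_components/HostRoles/host_name' \
      | python -c 'import sys, json; print "\n".join(hc["HostRoles"]["host_name"] for hc in json.load(sys.stdin)["host_components"])'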



Re: configure additional contacts in Nagios

2014-11-25 Thread Yusaku Sako
This isn't directly supported via the Ambari Web UI, but you can modify the
following template on the Ambari Server host:
/var/lib/ambari-server/resources/stacks/HDP/2.0.6/services/NAGIOS/package/templates/contacts.cfg.j2

1) Add customized contact
It should look something like:

define contact {
        contact_name    mycontact            ; Short name of user
        use             generic-contact      ; Inherit default values from generic-contact template (defined above)
        alias           Nagios Admin         ; Full name of user
        email           myem...@gmail.com    ; <--- CHANGE THIS TO YOUR EMAIL ADDRESS
}

define contactgroup {
        contactgroup_name   admins
        alias               Nagios Administrators
        members             nagiosadmin,sys_logger,mycontact   ; <-- ADD NEWLY DEFINED CONTACT HERE
}

2) Restart ambari-server and restart Nagios from Ambari Web
3) Verify that /etc/nagios/objects/contacts.cfg on the Nagios Server host
has the updated info.
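
For example (a sketch of steps 2 and 3; run the grep on the Nagios Server
host after the restart):

    ambari-server restart      # then restart Nagios from Ambari Web
    grep -A 4 mycontact /etc/nagios/objects/contacts.cfg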

http://nagios.sourceforge.net/docs/3_0/objectdefinitions.html for more info
on configuring contacts.

BTW, upcoming Ambari 2.0 will introduce a flexible alerting system,
including multiple notification targets and a fine-grained control of who
receives what type of alerts for which severities, etc (this is partially
working in trunk, though a lot of it is still WIP).

Thanks,
Yusaku


On Thu, Nov 20, 2014 at 7:50 AM, Artem Ervits  wrote:

>  Hello,
>
>
>
> I’m trying to add additional contacts to Nagios and I can’t seem to find a
> way to do so. Is that supported?
>
>
>
> Artem Ervits
>
> New York Presbyterian
>
>
>
> This electronic message is intended to be for the use only of the named
> recipient, and may contain information that is confidential or privileged.
> If you are not the intended recipient, you are hereby notified that any
> disclosure, copying, distribution or use of the contents of this message is
> strictly prohibited. If you have received this message in error or are not
> the named recipient, please notify us immediately by contacting the sender
> at the electronic mail address noted above, and delete and destroy all
> copies of this message. Thank you.
>



Re: 1.7.0

2014-11-25 Thread Yusaku Sako
Yes, there's a Ubuntu 12 repo available.
Please follow the Ambari Quick Start Guide [1] to try out Ambari Server on
Ubuntu 12.

[1] https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

Thanks,
Yusaku

On Tue, Nov 25, 2014 at 3:28 AM, Dessie K  wrote:

> Ok thanks,
>
> So is there a binary for Ubuntu 12 available or coming soon?
>
> I can probably just install Ubuntu 12 but I guess there is a good chance
> it will work in 14 also? (Although not supported)
>
> On Mon, Nov 24, 2014 at 6:30 PM, Yusaku Sako 
> wrote:
>
>> Hi,
>>
>> Ambari 1.7.0 adds support specifically for Ubuntu 12 only and not for
>> Ubuntu 14 yet.
>>
>> Yusaku
>>
>> On Mon, Nov 24, 2014 at 5:48 AM, Dessie K 
>> wrote:
>>
>>> Hi,
>>>
>>> I believe from the roadMap 1.7.0 is available this month and contains
>>> Ubuntu support.
>>>
>>> Is there a pre-release available with binary for Ubuntu 14.04?
>>>
>>


Re: 1.7.0

2014-11-24 Thread Yusaku Sako
Hi,

Ambari 1.7.0 adds support specifically for Ubuntu 12 only and not for
Ubuntu 14 yet.

Yusaku

On Mon, Nov 24, 2014 at 5:48 AM, Dessie K  wrote:

> Hi,
>
> I believe from the roadMap 1.7.0 is available this month and contains
> Ubuntu support.
>
> Is there a pre-release available with binary for Ubuntu 14.04?
>



Re: Capacity scheduler configuration using Ambari 1.6.1

2014-11-19 Thread Yusaku Sako
Sounds like your dashboard preference information is corrupt.
Is this when you are logged in as the "admin" user?
If so, can you try the following call from the Ambari Server host? (modify
the admin credentials accordingly)

curl -i -u admin:admin -H 'X-Requested-By: ambari' -X POST -d
'{"user-pref-admin-dashboard":"{\"dashboardVersion\":\"new\",\"visible\":[\"2\",\"4\",\"17\",\"11\",\"12\",\"13\",\"14\",\"1\",\"5\",\"3\",\"15\",\"20\",\"19\",\"21\",\"23\",\"24\",\"25\",\"26\",\"27\",\"28\"],\"hidden\":[[\"22\",\"Region
In
Transition\"]],\"threshold\":{\"1\":[\"80\",\"90\"],\"2\":[85,95],\"3\":[90,95],\"4\":[80,90],\"5\":[1000,3000],\"6\":[70,90],\"7\":[90,95],\"8\":[50,75],\"9\":[3,12],\"10\":[],\"11\":[],\"12\":[],\"13\":[],\"14\":[],\"15\":[],\"16\":[],\"17\":[],\"18\":[],\"19\":[],\"20\":[70,90],\"21\":[10,19.2],\"22\":[3,10],\"23\":[],\"24\":[70,90],\"25\":[],\"26\":[50,75],\"27\":[50,75],\"28\":[85,95],\"29\":[85,95]}}"}'
http://localhost:8080/api/v1/persist

I hope this helps,

Yusaku



On Tue, Nov 18, 2014 at 8:07 AM, Artem Ervits  wrote:

>  Yusaku,
>
>
>
> I get the following error, if image doesn’t appear, the error is
> “TypeError: threshold[id] is undefined
>
>
>
> -----Original Message-----
> From: Yusaku Sako [mailto:yus...@hortonworks.com]
> Sent: Saturday, November 15, 2014 7:52 PM
> To: user@ambari.apache.org
> Subject: Re: Capacity scheduler configuration using Ambari 1.6.1
>
> Hi Artem,
>
> > The other issue I have is my dashboard disappeared, basically the page
> > is blank. Any hints on that?
>
> That's very strange.
> Can you open up the JavaScript console in the browser to see if you are
> hitting any errors?
>
> Yusaku
>
> On Thu, Nov 13, 2014 at 6:05 PM, Artem Ervits  wrote:
>
> > Hello all,
> >
> > I’m trying to configure capacity scheduler using 1.6.1 and I’m having
> > difficulties. The problem is that it requires to create a new
> > configuration group and after doing so, I enter my queues and it never
> > switches to the new config group. I check the RM UI and I don’t see
> > the queues there either, I do however see the new settings in the
> > capacity-scheduler.xml. I see that
> > 1.7.0 has refreshing queues as part of the release, is that just an
> > ability to refresh a queue through Ambari UI or a whole rebuild of the
> > functionality? I also saw in HDP2.2 tutorial that I no longer need to
> > create configuration group, would you suggest to wait for 1.7 or it is
> > absolutely doable in 1.6.1?
> >
> > The other issue I have is my dashboard disappeared, basically the page
> > is blank. Any hints on that?
> >
> > Thanks
> >
> > This electronic message is intended to be for the use only of the
> > named recipient, and may contain information that is confidential or
> > privileged. If you are not the intended recipient, you are hereby
> > notified that any disclosure, copying, distribution or use of the
> > contents of this message is strictly prohibited. If you have received
> > this message in error or are not the named recipient, please notify us
> > immediately by contacting the sender at the electronic mail address
> > noted above, and delete and destroy all copies of this message. Thank you.

Re: Accumulo Custom Stack

2014-11-15 Thread Yusaku Sako
Roshan,

Taking Storm as an example:
* The list of Zookeeper Servers is populated via Ambari Web UI here [1].
* If you are deploying via Blueprint, you would have to modify
BlueprintConfigurationProcessor.java [2].

[1] 
https://github.com/apache/ambari/blob/branch-1.7.0/ambari-web/app/models/service_config.js#L471
[2] 
https://github.com/apache/ambari/blob/branch-1.7.0/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/BlueprintConfigurationProcessor.java#L1031

I hope this helps.

Yusaku

On Fri, Nov 14, 2014 at 12:54 PM, Roshan Punnoose  wrote:
> Yeah I was able to get that to work with the latest ambari 1.7.0 nightly.
> However, I was hoping that there was a way to pull the zookeepers value from
> somewhere. I know that this is working in Storm currently, but can't figure
> out how.
>
> On Tue, Nov 11, 2014 at 6:35 PM, Yusaku Sako  wrote:
>>
>> Just FYI...
>> If you are interested in running/managing Accumulo via Ambari, the
>> upcoming 1.7.0 release (should finalize in a week or so) will allow you to
>> easily do that via Slider - it will run Accumulo in YARN containers.
>>
>> Yusaku
>>
>> On Tue, Nov 11, 2014 at 1:12 PM, Roshan Punnoose 
>> wrote:
>>>
>>> Hey all,
>>>
>>> I am trying to modify the Accumulo Custom Stack for Ambari I found on
>>> JIRA: https://issues.apache.org/jira/browse/AMBARI-5265. Seems like there
>>> are only a few modifications to make. However, I can't seem to figure out
>>> how to populate the zookeeper property from Ambari. Is it possible to get
>>> Ambari to fill in the zookeeper list somehow?
>>>
>>> Roshan
>>


Re: Capacity scheduler configuration using Ambari 1.6.1

2014-11-15 Thread Yusaku Sako
Hi Artem,

> The other issue I have is my dashboard disappeared, basically the page is 
> blank. Any hints on that?

That's very strange.
Can you open up the JavaScript console in the browser to see if you
are hitting any errors?

Yusaku

On Thu, Nov 13, 2014 at 6:05 PM, Artem Ervits  wrote:
> Hello all,
>
> I’m trying to configure capacity scheduler using 1.6.1 and I’m having
> difficulties. The problem is that it requires to create a new configuration
> group and after doing so, I enter my queues and it never switches to the new
> config group. I check the RM UI and I don’t see the queues there either, I
> do however see the new settings in the capacity-scheduler.xml. I see that
> 1.7.0 has refreshing queues as part of the release, is that just an ability
> to refresh a queue through Ambari UI or a whole rebuild of the
> functionality? I also saw in HDP2.2 tutorial that I no longer need to create
> configuration group, would you suggest to wait for 1.7 or it is absolutely
> doable in 1.6.1?
>
> The other issue I have is my dashboard disappeared, basically the page is
> blank. Any hints on that?
>
> Thanks
>
> This electronic message is intended to be for the use only of the named
> recipient, and may contain information that is confidential or privileged.
> If you are not the intended recipient, you are hereby notified that any
> disclosure, copying, distribution or use of the contents of this message is
> strictly prohibited. If you have received this message in error or are not
> the named recipient, please notify us immediately by contacting the sender
> at the electronic mail address noted above, and delete and destroy all
> copies of this message. Thank you.



Re: Changing java in 1.2.4

2014-11-11 Thread Yusaku Sako
I believe Ambari 1.5.0 was when JDK 7 was made the default option.
1.2.4 is pretty old and I'm pretty sure it was never tested with JDK 7,
though it could work (but no guarantees).
You can try setting a custom JDK path via the "ambari-server setup -j" option and see.
I highly suggest you test this in a staging environment first.
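
If you do try it, it would look something like this (the JDK path here is
only an example; on an old release like 1.2.4 the exact invocation may
differ, so check the command's usage output first):

    ambari-server setup -j /usr/java/jdk1.7.0_45
    ambari-server restart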

Yusaku

On Thu, Oct 9, 2014 at 4:28 PM, Anisha Agarwal 
wrote:

>  Hi,
>
>  I am using ambari 1.2.4.
>
>  I am trying to upgrade the version of java. I have some questions
> regarding that.
>
>
>1. Where all do I need to make changes to upgrade the version before
>installation?
>2. What all changes are needed post installation?
>3. Does ambari 1.2.4 support higher version of jdk than jdk-1.6.0_31?
>
> Thanks,
> Anisha
>



Re: how to install a specific version of HDP using Ambari

2014-11-11 Thread Yusaku Sako
Right, but did you specify the base URLs for HDP 2.1.3 in Select Stack page
of the Install Wizard?
If you've done that and you still encountered a failure during "Install,
Start and Test" page, you should do what Jeff suggested:

> 1) Can you confirm what is in /etc/yum.repos.d/HDP.repo? This file is
> generated by Ambari based on the Base URLs you enter and should reflect
> the 2.1.3 URLs that you entered during the wizard.
> 2) "yum clean all"
> 3) "yum info hadoop" and see what version it returns.

Yusaku
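
Spelled out, those checks are roughly (a sketch for CentOS 6; the exact
build number in the version string will vary):

    cat /etc/yum.repos.d/HDP.repo       # baseurl should end in .../2.x/updates/2.1.3.0
    yum clean all
    yum info hadoop | grep -i version   # expect something like 2.4.0.2.1.3.0-*, not 2.1.7.0-784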

On Wed, Nov 5, 2014 at 5:29 PM, guxiaobo1982  wrote:

> this is not a HDP.repo file, but ambari.repo file, I installed ambari
> using the binaries got from apache site.
>
>
> [root@ambari yum.repos.d]# more ambari.repo
>
> [ambari-1.x]
>
> name=Ambari 1.x
>
> baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
>
> gpgcheck=1
>
> gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
>
> enabled=1
>
> priority=1
>
>
> [Updates-ambari-1.6.1]
>
> name=ambari-1.6.1 - Updates
>
> baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.6.1
>
> gpgcheck=1
>
> gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
>
> enabled=1
>
> priority=1
>
> [root@ambari yum.repos.d]#
>
>
> -- Original --
> From: "Jeff Sposetti"
> Send time: Wednesday, Nov 5, 2014 9:18 PM
> To: "user@ambari.apache.org"
> Subject: Re: how to install a specific version of HDP using Ambari
>
> I looks like it's still trying to grab the 2.1.7 RPMs.
>
> 1) Can you confirm what is in /etc/yum.repos.d/HDP.repo ? This file is
> generated by Ambari based on the Base URLs you enter and should reflect the
> 2.1.3 urls that you entered during the wizard.
> 2) "yum clean all"
> 3) "yum info hadoop" and see what version it returns.
>
>
> On Wed, Nov 5, 2014 at 4:16 AM, guxiaobo1982  wrote:
>
>> I tried
>> http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.3.0 and
>> http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.5.0 for
>> CENTOS6 both,
>>
>> can with error like this
>>
>>
>> stderr:   /var/lib/ambari-agent/data/errors-302.txt
>>
>> 2014-11-05 17:12:21,987 - Error while executing command 'install':
>> Traceback (most recent call last):
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>>  line 111, in execute
>> method(env)
>>   File 
>> "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/FALCON/package/scripts/falcon_client.py",
>>  line 25, in install
>> self.install_packages(env)
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>>  line 167, in install_packages
>> Package(name)
>>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
>> line 148, in __init__
>> self.env.run()
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
>> line 149, in run
>> self.run_action(resource, action)
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
>> line 115, in run_action
>> provider_action()
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
>>  line 40, in action_install
>> self.install_package(package_name)
>>   File 
>> "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
>>  line 36, in install_package
>> shell.checked_call(cmd)
>>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
>> line 35, in checked_call
>> return _call(command, logoutput, True, cwd, env, preexec_fn, user, 
>> wait_for_finish, timeout)
>>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
>> line 90, in _call
>> raise Fail(err_msg)
>> Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install falcon' returned 1. 
>> Error Downloading Packages:
>>   hadoop-yarn-2.4.0.2.1.7.0-784.el6.x86_64: failure: 
>> hadoop/hadoop-yarn-2.4.0.2.1.7.0-784.el6.x86_64.rpm from HDP-2.1: [Errno 
>> 256] No more mirrors to try.
>>   bigtop-jsvc-1.0.10-1.el6.x86_64: failure: 
>> bigtop-jsvc/bigtop-jsvc-1.0.10-1.el6.x86_64.rpm from HDP-2.1: [Errno 256] No 
>> more mirrors to try.
>>   hadoop-2.4.0.2.1.7.0-784.el6.x86_64: failure: 
>> hadoop/hadoop-2.4.0.2.1.7.0-784.el6.x86_64.rpm from HDP-2.1: [Errno 256] No 
>> more mirrors to try.
>>   hadoop-client-2.4.0.2.1.7.0-784.el6.x86_64: failure: 
>> hadoop/hadoop-client-2.4.0.2.1.7.0-784.el6.x86_64.rpm from HDP-2.1: [Errno 
>> 256] No more mirrors to try.
>>   hadoop-hdfs-2.4.0.2.1.7.0-784.el6.x86_64: failure: 
>> hadoop/hadoop-hdfs-2.4.0.2.1.7.0-784.el6.x86_64.rpm from HDP-2.1: [Errno 
>> 256] No more mirrors to try.
>>   hadoop-mapreduce-2.4.0.2.1.7.0-784.el6.x86_64: failure: 
>> hadoop/hadoop-mapreduce-2.4.0.2.1.7.0-784.el6.x86_64.rpm from HDP-2.1: 
>> [Errno 256] No more mirrors to try.
>>   z

Re: Accumulo Custom Stack

2014-11-11 Thread Yusaku Sako
Just FYI...
If you are interested in running/managing Accumulo via Ambari, the upcoming
1.7.0 release (should finalize in a week or so) will allow you to easily do
that via Slider - it will run Accumulo in YARN containers.

Yusaku

On Tue, Nov 11, 2014 at 1:12 PM, Roshan Punnoose  wrote:

> Hey all,
>
> I am trying to modify the Accumulo Custom Stack for Ambari I found on
> JIRA: https://issues.apache.org/jira/browse/AMBARI-5265. Seems like there
> are only a few modifications to make. However, I can't seem to figure out
> how to populate the zookeeper property from Ambari. Is it possible to get
> Ambari to fill in the zookeeper list somehow?
>
> Roshan
>



Re: Building rpms with HDP source

2014-11-11 Thread Yusaku Sako
Hi Waldyn,

Ambari itself does not make any references to or have any knowledge about
HDP_COMPONENT_VARIABLES.sh.
I'm trying to understand what you did.
You've made changes to some Hadoop source, created your own Hadoop (or
ecosystem component) RPMs, and you are trying to deploy it via Ambari and
getting the "HDP_COMPONENT_VARIABLES.sh cannot be found" error?

Yusaku



On Tue, Nov 11, 2014 at 10:33 AM, Benbenek, Waldyn J <
waldyn.benbe...@unisys.com> wrote:

> I am sure that this has been covered before, but I cannot find an answer
> to it.  I am trying to build the Hadoop rpms for the repository.  I have a
> change I need to put in for our cluster arrangement.  I installed the rpm
> setup from the *src.rpm.  When I run it I get a message that the file
> HDP_COMPONENT_VARIABLES.sh cannot be found.  I looked for it in the
> repository tar balls but could not find it there either.
>
>
>
> I need to build for SLES.
>
>
>
> Thanks,
>
>
>
> Wally Benbenek
>



Re: Stop all components API call no longer seems to work

2014-11-07 Thread Yusaku Sako
Hi Greg,

The API call you mentioned to stop all components on a host still
works in 1.7.0 (I just verified on my recent 1.7.0 cluster).
Operation_level is not mandatory and the WARN can be ignored.
Operation_level drives the behavior of operations when
services/hosts/host_components are in maintenance mode.
Unfortunately I don't see any documentation on this.
I presume you are getting 200 because all components on the specified
host are already stopped.

Yusaku
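
For reference, a request that includes an operation level looks roughly like
this (modeled on what Ambari Web sends; treat the exact operation_level
fields as an approximation and substitute your own cluster/host names):

PUT /api/v1/clusters/testcluster/hosts/c6404.ambari.apache.org/host_components
{"RequestInfo": {"context": "Stop All Components",
                 "operation_level": {"level": "HOST",
                                     "cluster_name": "testcluster",
                                     "host_name": "c6404.ambari.apache.org"}},
 "Body": {"HostRoles": {"state": "INSTALLED"}}}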

On Fri, Nov 7, 2014 at 5:55 AM, Greg Hill  wrote:
> This used to work in earlier 1.7.0 builds, but doesn't seem to any longer:
>
> PUT
> /api/v1/clusters/testcluster/hosts/c6404.ambari.apache.org/host_components
> {"RequestInfo": {"context": "Stop All Components"}, "Body": {"HostRoles":
> {"state": "INSTALLED"}}}
>
> Seeing this in the server logs:
> 13:05:42,082  WARN [qtp1842914725-24] AmbariManagementControllerImpl:2149 -
> Can not determine request operation level. Operation level property should
> be specified for this request.
> 13:05:42,082  INFO [qtp1842914725-24] AmbariManagementControllerImpl:2162 -
> Received a updateHostComponent request, clusterName=testcluster,
> serviceName=HDFS, componentName=DATANODE, hostname=c6404.ambari.apache.org,
> request={ clusterName=testcluster, serviceName=HDFS, componentName=DATANODE,
> hostname=c6404.ambari.apache.org, desiredState=INSTALLED,
> desiredStackId=null, staleConfig=null, adminState=null}
> 13:05:42,083  INFO [qtp1842914725-24] AmbariManagementControllerImpl:2162 -
> Received a updateHostComponent request, clusterName=testcluster,
> serviceName=GANGLIA, componentName=GANGLIA_MONITOR,
> hostname=c6404.ambari.apache.org, request={ clusterName=testcluster,
> serviceName=GANGLIA, componentName=GANGLIA_MONITOR,
> hostname=c6404.ambari.apache.org, desiredState=INSTALLED,
> desiredStackId=null, staleConfig=null, adminState=null}
> 13:05:42,083  INFO [qtp1842914725-24] AmbariManagementControllerImpl:2162 -
> Received a updateHostComponent request, clusterName=testcluster,
> serviceName=YARN, componentName=NODEMANAGER,
> hostname=c6404.ambari.apache.org, request={ clusterName=testcluster,
> serviceName=YARN, componentName=NODEMANAGER,
> hostname=c6404.ambari.apache.org, desiredState=INSTALLED,
> desiredStackId=null, staleConfig=null, adminState=null}
>
> But I get an empty response with status 200 and no request was created.
> Shouldn't that be an error if it can't act on my request?
>
> Are there some docs about how to formulate the 'operation level' part of the
> request?
>
> Greg
>



Re: HDP 2.1.7 can't start hive metastore service

2014-11-04 Thread Yusaku Sako
Hi,

I've tried installing HDP 2.1.7 using Ambari 1.6.1 on CentOS 6.4 today and
I did not run into the Hive issue you mentioned.
I selected "New MySQL Database" for Hive.
You mentioned that it's a single-node cluster.

1. If you run "export HIVE_CONF_DIR=/etc/hive/conf.server ;
/usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive
-passWord " from the command line, does that work?
2. schematool is trying to connect via
jdbc:mysql://lix1.bh.com/hive?createDatabaseIfNotExist=true. Does lix1.bh.com
resolve properly?

Yusaku
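
Both checks can be run from the metastore host in a few lines, e.g. (a
sketch; substitute the real hive password and your own DB host name):

    getent hosts lix1.bh.com                       # does the metastore DB host resolve?
    mysql -h lix1.bh.com -u hive -p -e 'status'    # can we reach MySQL as the hive user?
    export HIVE_CONF_DIR=/etc/hive/conf.server
    /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive -passWord '<hive_password>'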

On Tue, Nov 4, 2014 at 1:00 AM, guxiaobo1982  wrote:

>
> Hi,
>
> I use ambari 1.6.1 to install a single node cluster, I can see ambari
> installed the lasted version 2.1.7 of HDP, but the hive service failed to
> start with the following messages:
>
>
> stderr:   /var/lib/ambari-agent/data/errors-56.txt
>
> 2014-11-04 16:46:08,931 - Error while executing command 'start':
> Traceback (most recent call last):
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 111, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
>  line 42, in start
> self.configure(env) # FOR SECURITY
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive_metastore.py",
>  line 37, in configure
> hive(name='metastore')
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/HIVE/package/scripts/hive.py",
>  line 108, in hive
> not_if = check_schema_created_cmd
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 148, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 149, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 115, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/system.py",
>  line 239, in action_run
> raise ex
> Fail: Execution of 'export HIVE_CONF_DIR=/etc/hive/conf.server ; 
> /usr/lib/hive/bin/schematool -initSchema -dbType mysql -userName hive 
> -passWord [PROTECTED]' returned 1. Metastore connection URL:
> jdbc:mysql://lix1.bh.com/hive?createDatabaseIfNotExist=true
> Metastore Connection Driver :  com.mysql.jdbc.Driver
> Metastore connection User: hive
> org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema 
> version.
> *** schemaTool failed ***
>
> stdout:   /var/lib/ambari-agent/data/output-56.txt
>
> 2014-11-04 16:45:55,983 - Execute['mkdir -p /tmp/HDP-artifacts/; curl -kf 
> -x "" --retry 10 
> http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] {'environment': ..., 
> 'not_if': 'test -e /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip', 
> 'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
> 2014-11-04 16:45:56,001 - Skipping Execute['mkdir -p /tmp/HDP-artifacts/; 
> curl -kf -x "" --retry 10 
> http://ambari.bh.com:8080/resources//UnlimitedJCEPolicyJDK7.zip -o 
> /tmp/HDP-artifacts//UnlimitedJCEPolicyJDK7.zip'] due to not_if
> 2014-11-04 16:45:56,115 - Directory['/etc/hadoop/conf.empty'] {'owner': 
> 'root', 'group': 'root', 'recursive': True}
> 2014-11-04 16:45:56,116 - Link['/etc/hadoop/conf'] {'not_if': 'ls 
> /etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
> 2014-11-04 16:45:56,137 - Skipping Link['/etc/hadoop/conf'] due to not_if
> 2014-11-04 16:45:56,152 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': 
> Template('hadoop-env.sh.j2'), 'owner': 'hdfs'}
> 2014-11-04 16:45:56,153 - XmlConfig['core-site.xml'] {'owner': 'hdfs', 
> 'group': 'hadoop', 'conf_dir': '/etc/hadoop/conf', 'configurations': ...}
> 2014-11-04 16:45:56,159 - Generating config: /etc/hadoop/conf/core-site.xml
> 2014-11-04 16:45:56,160 - File['/etc/hadoop/conf/core-site.xml'] {'owner': 
> 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None}
> 2014-11-04 16:45:56,160 - Writing File['/etc/hadoop/conf/core-site.xml'] 
> because contents don't match
> 2014-11-04 16:45:56,177 - Execute['/bin/echo 0 > /selinux/enforce'] 
> {'only_if': 'test -f /selinux/enforce'}
> 2014-11-04 16:45:56,216 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-i386-32; ln -sf /usr/lib/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-i386-32/libsnappy.so'] {}
> 2014-11-04 16:45:56,241 - Execute['mkdir -p 
> /usr/lib/hadoop/lib/native/Linux-amd64-64; ln -sf /usr/lib64/libsnappy.so 
> /usr/lib/hadoop/lib/native/Linux-amd64-64/libsnappy.so'] {}
> 2014-11-04 16:45:56,262 - Directory['/var/log/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014-11-04 16:45:56,263 - Directory['/var/run/hadoop'] {'owner': 'root', 
> 'group': 'root', 'recursive': True}
> 2014

Re: possible bug in the Ambari API

2014-11-03 Thread Yusaku Sako
Hi Greg,

The yum repo you referred to is old and no longer maintained (I just
installed ambari-server off of it and I see the hash is
trunk:0e959b0ed80fc1a170cc10b1c75050c88a7b2d06, which is trunk code from
Oct 4).
Please use the URLs shown in the Quick Start Guide:
https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide

# to test the 1.7.0 branch build - updated nightly
wget -O /etc/yum.repos.d/ambari.repo
http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/latest/1.7.0/ambari.repo
OR
#  to test the trunk build - updated multiple times a day
wget -O /etc/yum.repos.d/ambari.repo
http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/latest/trunk/ambari.repo

Thanks,
Yusaku


On Mon, Nov 3, 2014 at 12:42 PM, Greg Hill  wrote:

>  /api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations works
> fine, just like any other GET method on a list of resources.
>
>  I did a yum update and ambari-server restart on my ambari node to rule
> that out.  Still get the same issue.  Happens
> for /api/v1/stacks/HDP/versions/2.1/services/HDFS/configurations/content as
> well.
>
>  This is my yum repo:
>
> http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/1.x/updates/1.7.0.trunk/
>
>  Am I missing some header that fixes things?  All I'm passing in in
> X-Requested-By.
>
>  Why does a GET on a single resource return two resources anyway?  That
> seems like it should be subdivided further if that's how it works.
>
>  Greg
>
>   From: Srimanth Gunturi 
> Reply-To: "user@ambari.apache.org" 
> Date: Monday, November 3, 2014 3:17 PM
>
> To: "user@ambari.apache.org" 
> Subject: Re: possible bug in the Ambari API
>
>   Hi Greg,
> I attempted the same API on latest 1.7.0 build, and do not see the issue
> (the comma is present between the two configurations).
> Do you see the same when you access
> "/api/v1/stacks2/HDP/versions/2.1/stackServices/HBASE/configurations" or
>  "/api/v1/stacks2/HDP/versions/2.1/stackServices/HDFS/configurations/content"
> ?
> Regards,
> Srimanth
>
>
>
> On Mon, Nov 3, 2014 at 12:12 PM, Greg Hill 
> wrote:
>
>>  Also, I still get the same broken response using 'stacks' instead of
>> 'stacks2'.  Is this a bug that was fixed recently?  I'm using a build from
>> last week.
>>
>>  Greg
>>
>>   From: Greg 
>> Reply-To: "user@ambari.apache.org" 
>> Date: Monday, November 3, 2014 3:05 PM
>>
>> To: "user@ambari.apache.org" 
>> Subject: Re: possible bug in the Ambari API
>>
>>   Oh?  I was basing it off the python client using 'stacks2'.  I figured
>> that stacks was deprecated, but I suppose I should have asked.  Neither API
>> is documented.  Why are there two?
>>
>>  Greg
>>
>>   From: Jeff Sposetti 
>> Reply-To: "user@ambari.apache.org" 
>> Date: Monday, November 3, 2014 2:54 PM
>> To: "user@ambari.apache.org" 
>> Subject: Re: possible bug in the Ambari API
>>
>>Greg, That's the /stacks2 API. Want to try with /stacks (which I
>> think is the preferred API resource)?
>>
>>
>> http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content
>>
>>
>> [
>>   {
>> "href" : 
>> "http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content";,
>> "StackConfigurations" : {
>>   "final" : "false",
>>   "property_description" : "Custom log4j.properties",
>>   "property_name" : "content",
>>   "property_type" : [ ],
>>   "property_value" : "\n# Licensed to the Apache Software Foundation 
>> (ASF) under one\n# or more contributor license agreements.  See the NOTICE 
>> file\n# distributed with this work for additional information\n# regarding 
>> copyright ownership.  The ASF licenses this file\n# to you under the Apache 
>> License, Version 2.0 (the\n# \"License\"); you may not use this file except 
>> in compliance\n# with the License.  You may obtain a copy of the License 
>> at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless 
>> required by applicable law or agreed to in writing, software\n# distributed 
>> under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT 
>> WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the 
>> License for the specific language governing permissions and\n# limitations 
>> under the License.\n\n\n# Define some default values that can be overridden 
>> by system 
>> properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n#
>>  Define the root logger to the system property 
>> \"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n# Logging 
>> Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File 
>> Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n#
>>  Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.-MM-dd\n\n# 
>> 30-day 
>> backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=o

Re: No Cluster Load, Memory, CPU and network information for HDPNamenode in Ganglia

2014-10-20 Thread Yusaku Sako
I presume Load, Memory, CPU, and Network are not showing for any of
HDPNameNode, HDPDataNode, HDPResourceManager, HDP*?
Are you seeing any metrics getting captured, and if so, what are those?
How is the /etc/hosts set up?  With Ganglia, based on my experience, things
don't work well unless /etc/hosts are set up in the following way:
  <ip_address>  <fqdn>  <short_hostname>

Something like:
192.168.64.101 c6401.ambari.apache.org c6401
192.168.64.102 c6402.ambari.apache.org c6402

On Mon, Oct 20, 2014 at 3:05 AM, Mingjiang Shi  wrote:

> Hi There,
> I deployed an HDP-2.1 cluster with Ganglia installed, but I don't see any
> Load, Memory, CPU and network information for HDPNamenode. Is this as
> designed? Thanks!
>
>
>
>
> --
> Cheers
> -MJ
>


