Re: Ambari upgrade

2015-09-01 Thread Jeff Sposetti
I checked and those instructions should not be used. The link is going to be 
removed from the 2.1.0 doc set.


Follow the guidance of first going to HDP 2.1 -> 2.2, then HDP 2.2 -> 2.3.



From: Loïc Chanel 
Sent: Tuesday, September 01, 2015 11:07 AM
To: user@ambari.apache.org
Subject: Re: Ambari upgrade

As I need to upgrade a cluster from HDP 2.1 to HDP 2.3, which one should I 
trust?
Thanks,

Loïc

Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne

2015-09-01 16:02 GMT+02:00 Jeff Sposetti <j...@hortonworks.com>:
Thanks for pointing this out. I don't think that section was meant to be there 
in the 2.1.0 doc set. It's not there in the 2.1.1 doc set.

From: Loïc Chanel <loic.cha...@telecomnancy.net>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Tuesday, September 1, 2015 at 3:41 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Ambari upgrade

Here it is:
http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_upgrading_Ambari/content/_upgrading_the_hdp_stack_from_21_to_23.html


Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne

2015-08-31 18:42 GMT+02:00 Jeff Sposetti <j...@hortonworks.com>:
Hi, in the Ambari 2.1.1 Upgrade Guide, where do you see the section that is 
dedicated to HDP 2.1 -> 2.3 upgrade?

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.1.0/bk_upgrading_Ambari/content/_upgrading_hdp_stack.html


I don't see a section for it. Can you share the link?

From: Loïc Chanel <loic.cha...@telecomnancy.net>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, August 31, 2015 at 11:02 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Ambari upgrade

Hi all,

As I was reading the documentation for upgrading to HDP 2.3 and Ambari 2.1, I 
noticed this sentence:
"Important : Ambari 2.1 does not support directly upgrading from HDP 2.0 or HDP 
2.1 to HDP 2.3. In order to upgrade from HDP 2.0 or HDP 2.1, you must first 
upgrade to HDP 2.2 using either Ambari 1.7 or 2.0. Once completed, upgrade your 
current Ambari to Ambari 2.1. Then, leverage Ambari 2.1 to complete the upgrade 
from HDP 2.2 to HDP 2.3."

But in this exact same documentation, there is a whole paragraph dedicated to 
the upgrade from HDP 2.1 to HDP 2.3.

Can someone please explain what I should understand?
Because I have to upgrade an HDP 2.1 & Ambari 1.7 cluster to HDP 2.3 & Ambari 
2.1, and I would like to know the surest way to do it.

Thanks in advance for your help,
Regards,


Loïc

Loïc CHANEL
Engineering student at TELECOM Nancy
Trainee at Worldline - Villeurbanne






Re: where are scripts stored?

2015-08-04 Thread Jeff Sposetti
Hi, are you referring to the Pig View and the Pig scripts? The scripts will be 
stored in the HDFS directory specified as part of your view configuration. I 
believe it defaults to "/user/${username}/pig/scripts", where ${username} will 
be replaced by the current user context accessing the view.
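
For example, to list a user's saved scripts from the command line (an 
illustrative check; substitute the real username and whatever path your view 
configuration uses):

hdfs dfs -ls /user/<username>/pig/scripts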

FWIW, you can check out the Pig view code here:

https://github.com/apache/ambari/tree/trunk/contrib/views/pig

And some additional View info in the Ambari Views guide:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_ambari_views_guide/content/ch_using_pig_view.html


From: Adaryl Wakefield <adaryl.wakefi...@hotmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, August 3, 2015 at 10:32 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: where are scripts stored?

I'm working through some Pig tutorials and I tried to use Ambari to write the 
scripts. Thing is, when I saved the script, I couldn't find where on disk it 
was actually stored. Because of that I had no idea how to give it the path to 
the file I was trying to process. In what directory are scripts stored when 
created with Ambari?

B.


Re: Ambari data corruption/recovery process

2015-06-27 Thread Jeff Sposetti
Hi,

(Others... please add/correct if I missed something).

I believe the keys are unrelated to whether the agent is bootstrapped with SSH 
or manually. There will be keys on the agents if the ambari server-agent 
communication was set up for two-way SSL. This is not set by default in the 
Ambari Server ambari.properties. If enabled, you will have this in the 
ambari.properties file.

security.server.two_way_ssl=true

So if two-way SSL is not enabled, the keys folder is empty on the agent hosts 
(and there is nothing to delete). If enabled, then yep, you have to clear that 
folder so that when the agent checks in with the replacement Ambari Server, the 
keys will get re-created to work with the new Ambari Server.

Cheers,

Jeff

From: Alex Kaplan <akap...@ifwe.co>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Saturday, June 27, 2015 at 3:16 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Ambari data corruption/recovery process


Is removing that directory necessary for agents that registered without SSH?

On Jun 26, 2015 5:53 PM, "Yusaku Sako" <yus...@hortonworks.com> wrote:
Yes, if you are talking about corruption, then you would need snapshots to go 
back to.
Recovery would be simpler if the Ambari Server hostname does not change (IP 
address changes should not matter).

One more step that I forgot to mention...  you would need to delete 
/var/lib/ambari-agent/keys/* from each agent before restarting it.

Yusaku

From: Clark Breyman <cl...@breyman.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Friday, June 26, 2015 5:22 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Ambari data corruption/recovery process

Thanks Yusaku for the quick response.

For our production systems, we're planning on using Postgres replication to 
ensure backups, though that doesn't defend against data corruption. Perhaps 
snapshots will be required.
Is there any documentation on restoring to a newly provisioned host? Is there 
any reason to use a DNS A record instead of a CNAME alias to simplify the 
recovery process?


On Fri, Jun 26, 2015 at 5:14 PM, Yusaku Sako <yus...@hortonworks.com> wrote:
Ambari DB should be backed up on a regular basis.  This is the most important 
piece of information.
It is also advisable to back up /etc/ambari-server/conf/ambari.properties.
If you have these two, you can restore Ambari Server back to a running 
condition on a different host.
If the hostname of the Ambari Server changes, then you would have to update 
/etc/ambari-agent/conf/ambari-agent.ini to point to the new Ambari Server 
hostname and restart the agent.
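
As a rough sketch of that repoint step (the hostname here is a placeholder; 
the setting lives under the [server] section of the agent ini):

sed -i 's/^hostname=.*/hostname=new-ambari-host.example.com/' /etc/ambari-agent/conf/ambari-agent.ini
ambari-agent restart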

Yusaku

From: Clark Breyman <cl...@breyman.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Friday, June 26, 2015 5:10 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Ambari data corruption/recovery process

I'm wondering if anyone can share pointers/procedures/best practices to handle 
the scenarios where:

a) The SQL database becomes corrupt (bugs, ...)
b) The Ambari service host is lost (e.g. EC2 instance termination, physical 
hardware loss, ...)




Re: NFS service via Ambari

2015-05-29 Thread Jeff Sposetti
Hi, You can install + manage the NFS gateway outside of Ambari. These 
instructions are newer...

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.10/bk_user-guide/content/user-guide-hdfs-nfs.html

For reference, check out this Epic JIRA about installing + managing the NFS 
gateway from Ambari...

https://issues.apache.org/jira/browse/AMBARI-9224
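
If you do manage the gateway outside Ambari, the manual start per that user 
guide looks roughly like this (a sketch only; the exact commands and users may 
differ by HDP version):

su - root -c "hadoop-daemon.sh start portmap"
su - hdfs -c "hadoop-daemon.sh start nfs3"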

From: Joshi Omkar <omkar.jo...@scania.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Friday, May 29, 2015 at 8:22 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: NFS service via Ambari

Hi,

I have an HDP-2.2.4.2-2 9-node cluster running.

There are multiple users who want to upload their files from their 
Windows/Linux desktops onto HDFS either via tools like WinSCP etc. or map the 
HDFS as a network drive.

I think 'HDFS NFS Gateway' is the way to go, but I couldn't find a way to do it 
via Ambari (I have done an automated installation of HDP via Ambari).
I came across this Hortonworks doc link for starting the NFS service, but shall 
I proceed with these manual changes on a node, or can I achieve it via Ambari?

Regards,
Omkar Joshi



Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

2015-05-17 Thread Jeff Sposetti
Hi,

Posted a comment to https://issues.apache.org/jira/browse/AMBARI-8458 that 
includes a brief (simple) example.

Note: this is for Ambari 2.0.0. Apologies if I didn’t highlight that earlier. 
If you are sticking with Ambari 1.7 and not upgrading to 2.0, then you will 
have to use the other API methods described. But once you get to Ambari 2.0, 
this API becomes an option.
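
For reference, a minimal sketch of that Ambari 2.0 call (names taken from your 
example below; see the JIRA comment for the exact payload):

curl --user admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"blueprint": "mymasterblueprint", "host_group": "compute"}' \
  http://ambari-server:8080/api/v1/clusters/CLUSTER/hosts/newhost.example.com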

Hope this helps.

Jeff

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Sunday, May 17, 2015 at 10:57 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: RE: Adding Hosts to Existing Cluster | Ambari 1.7.0


I think I will stick to the approach mentioned in 
https://issues.apache.org/jira/browse/AMBARI-8458 . This approach seems to be 
pretty easy to use.


Can someone help me out on the same?

Please see the below mail conversations with Jeff for details.

Help much appreciated!!

~Pratik


From: Yusaku Sako [mailto:yus...@hortonworks.com]
Sent: Sunday, May 17, 2015 6:32 PM
To: user@ambari.apache.org
Subject: Re: Adding Hosts to Existing Cluster | Ambari 1.7.0

I think others can help you with the blueprint-style add host call.
In the meantime, you should also look at 
https://cwiki.apache.org/confluence/display/AMBARI/Bulk+install+components+on+selected+hosts
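
A rough sketch of that bulk call (host and component names are illustrative; 
see the wiki page for the exact payload and the follow-up PUT that moves the 
components to INSTALLED):

curl --user admin:admin -H "X-Requested-By: ambari" -X POST \
  -d '{"RequestInfo":{"query":"Hosts/host_name.in(host1,host2)"},"Body":{"host_components":[{"HostRoles":{"component_name":"DATANODE"}}]}}' \
  http://ambari-server:8080/api/v1/clusters/CLUSTER/hosts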

Thanks,
Yusaku

From: Pratik Gadiya <pratik_gad...@persistent.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Sunday, May 17, 2015 4:22 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: RE: Adding Hosts to Existing Cluster | Ambari 1.7.0

Jeff,

I had a look at the link you provided; however, I am not sure why it didn't 
work for me.

Below is the command which I tried,
Command:
curl --user admin:admin -H "X-Requested-By: ambari" -i -X POST -d 
'{"blueprint_name": "mymasterblueprint", "host_group": "compute"}' 
https://XX.XX.XX.XX:8443/api/v1/clusters/CLUSTER/hosts/vmkdev0027.persistent.com

Response:
HTTP/1.1 400 Bad Request
Set-Cookie: AMBARISESSIONID=15w0nek4yww411pi8iqy70c8u5;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 160
Server: Jetty(7.6.7.v20120910)

{
  "status" : 400,
  "message" : "The properties [blueprint, host_group] specified in the request 
or predicate are not supported for the resource type Host."
}

Please let me know if I have missed something.

Note:
vmkdev0027.persistent.com - the host which I need to add
CLUSTER - the cluster name specified in the URL


Thanks & Regards,
Pratik

From: Jeff Sposetti [mailto:j...@hortonworks.com]
Sent: Sunday, May 17, 2015 1:15 PM
To: user@ambari.apache.org
Subject: Re: Adding Hosts to Existing Cluster | Ambari 1.7.0


Have you looked at using Blueprints API for "add host"?



https://issues.apache.org/jira/browse/AMBARI-8458






From: Pratik Gadiya <pratik_gad...@persistent.com>
Sent: Sunday, May 17, 2015 1:57 AM
To: user@ambari.apache.org
Subject: Adding Hosts to Existing Cluster | Ambari 1.7.0

Hi All,

I want to add hosts to the existing Hadoop cluster, which is deployed via 
Ambari REST APIs.

For the same, I am referring to the link 
https://cwiki.apache.org/confluence/display/AMBARI/Add+a+host+and+deploy+components+using+APIs

In the above link, we can observe that we have to make POST REST calls to 
install the services on the newly added hosts.
Here the number of such REST calls would be equivalent to the number of 
services which we want to install (as shown below)

[cid:image001.png@01D090DF.DAF8A760]

I am wondering if there is any way by which I can install this many services, 
such as DATANODE, GANGLIA_MONITOR, NODEMANAGER, etc., in a single REST call to 
the newly added hosts.

An explanation with a small example of that REST call body would be much appreciated.

Thanks,
Pratik





Re: Ambari View HBase Client

2015-05-15 Thread Jeff Sposetti
I wonder if you are hitting this issue?

https://issues.apache.org/jira/browse/AMBARI-10748

From: "John.Bork" mailto:john.b...@target.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Friday, May 15, 2015 at 6:30 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Ambari View HBase Client

Hi,

I am currently working on an Ambari View that will scan an HBase table on a 
Hadoop cluster. I am having difficulties initializing the connection to HBase 
because the HBaseConfiguration is failing to load the "hbase-default.xml" as a 
resource with the ClassLoader from within the Ambari View Resource Class I 
implemented. It attempts to do so on this line of the HBaseConfiguration class, 
version hbase-common-0.98.4.2.2.4.2-2-hadoop2:


Line 102: conf.addResource("hbase-default.xml");

Which gets loaded in the Configuration class on this line:

Line 2218: return classLoader.getResource(name);

Where classLoader is defined:

Line 660: classLoader = Thread.currentThread().getContextClassLoader();
Or
Line 662: classLoader = Configuration.class.getClassLoader();


I tried adding hbase-default.xml under the resources folder for the Ambari View 
project, but it did not find it there.

Is there a specific location where I can put hbase-default.xml so that it will 
be discovered by the classloader of the Ambari View?

How does the fact that the Ambari View runs in a servlet container influence 
where the ClassLoader checks?



-John Bork



Re: Unable to Change Hive Metastore after Installation

2015-05-08 Thread Jeff Sposetti
Hi,

Ambari 2.0 allows you to modify the database connection information from Ambari 
Web. Ambari 2.0 documentation is here 
http://docs.hortonworks.com/HDPDocuments/Ambari-2.0.0.0/index.html and Ambari 
1.7 -> 2.0 upgrade information is here 
http://docs.hortonworks.com/HDPDocuments/Ambari-2.0.0.0/Ambari_Doc_Suite/ADS_v200.html#ref-385ec2d5-0648-4a6b-afed-5e079b6cd608

And here are some abridged instructions, once you are on Ambari 2.0, to switch 
from the default MySQL database to use existing databases (such as existing 
MySQL, Oracle or PostgreSQL). Refer to the Ambari Reference Guide for the 
prerequisites for setting up the existing database for Hive:


  1.  After performing the prerequisite steps, in Ambari Web, browse to 
Services > Hive > Configs and select the Existing Database option. For example, 
if you want to switch from default MySQL to an existing Oracle database, select 
the "Existing Oracle Database" option.

  2.  Modify the database configuration settings as appropriate for your 
environment (such as database host, username, password, etc).

  3.  Click the Test Connection button. This will confirm a connection to the 
database is available.

  4.  Save the configuration changes.

  5.  Restart Hive components as instructed for the change to take effect.

Important: this procedure does not export or import any existing Hive 
information from the current database to the new database. You must perform 
that data migration on your own prior to making the Hive Metastore database 
connection change.

Jeff

From: Rahul Verma <rahulverm...@gmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Friday, May 8, 2015 at 8:49 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Unable to Change Hive Metastore after Installation

Dear All,

We have installed Apache Ambari Version 1.7.0. We had installed Hive with the 
Hive Metastore database as MySQL.

Currently, we have to move the database from MySQL to Oracle (our application 
is currently using Oracle, and we would like the Hive metastore also to be 
hosted on Oracle). However, the Ambari UI doesn't allow us to edit the Hive 
Database (the option is non-editable). Please let us know whether we can change 
the metastore database after the installation of the cluster.

Attaching the screenshot of the Hive config page.

I would really appreciate if someone could point me in the right direction.


Re: Enable Oozie High Availability from AMBARI 2.0

2015-04-24 Thread Jeff Sposetti
Are you using Derby as the database? With Derby, the ability to configure a 
second Oozie Server does not show.


Once you switch Oozie from Derby, there will be an option to add a second Oozie 
Server. There is a bit more info here...


http://docs.hortonworks.com/HDPDocuments/Ambari-2.0.0.0/Ambari_Doc_Suite/ADS_v200.html#ref-b983c82c-849a-4b49-822d-076e1d7a4ac4
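
A quick way to check which database Oozie is currently on (illustrative; the 
property lives in oozie-site):

grep -A1 "oozie.service.JPAService.jdbc.url" /etc/oozie/conf/oozie-site.xml

A jdbc:derby URL means you are still on Derby.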



From: Shaik M 
Sent: Friday, April 24, 2015 4:00 AM
To: user@ambari.apache.org
Subject: Enable Oozie High Availability from AMBARI 2.0

Hi,

I am trying to enable Oozie HA from Ambari 2.0, but I didn't find an option to 
enable it.

I am unable to find the documentation about this configuration.

Please advise...

Thanks,
Shaik M


Re: Ambari 2.0 Email Notifications not receiving

2015-04-23 Thread Jeff Sposetti
Hi, Did you restart Ambari Server? Wonder if you are hitting this...

https://issues.apache.org/jira/browse/AMBARI-9823
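
If that is it, a restart should pick up the notification configuration 
(assuming that JIRA is indeed the cause):

ambari-server restart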

From: Shaik M <munna.had...@gmail.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, April 23, 2015 at 6:00 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Ambari 2.0 Email Notifications not receiving

Hi,

I have deployed Ambari 2.0 with HDP 2.2.4.

I have created a new Email notification and selected all Groups & Severities.

But I am not receiving any alert notifications at the provided email address 
when an alert is published.

Please help me to resolve this issue.

Thanks,
Shaik


Re: Ambari 2.0 - Storm not starting

2015-04-18 Thread Jeff Sposetti
For the Kafka issue, I'm wondering if this helps? Also, please confirm you are 
using HDP 2.2 (specifically HDP 2.2.0.0)?

Thanks.



If you are managing a HDP 2.2 cluster that includes Kafka, you must adjust the 
Kafka configuration to send metrics to the Ambari Metrics system. From Ambari 
Web, browse to Services > Kafka > Configs and edit the kafka-env template found 
under Advanced kafka-env to include the following:

# Add kafka sink to classpath and related dependencies
if [ -e "/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar" ]; then
  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/ambari-metrics-kafka-sink.jar
  export CLASSPATH=$CLASSPATH:/usr/lib/ambari-metrics-kafka-sink/lib/*
fi



From: Frank Eisenhauer 
Sent: Saturday, April 18, 2015 8:32 AM
To: Ambari User
Subject: Ambari 2.0 - Storm not starting

Hi All,

are there any known incompatibilities between Ambari 2.0.0 and Kafka/Storm?
Since the update to Ambari 2.0, the Kafka and Storm services are failing on
start.

There are a lot of error entries in Storm nimbus.log:

2015-04-18 14:34:08 b.s.d.nimbus [ERROR] Error when processing event
java.lang.NullPointerException: null
 at clojure.lang.Numbers.ops(Numbers.java:942)
~[clojure-1.5.1.jar:na]
 at clojure.lang.Numbers.isZero(Numbers.java:90)
~[clojure-1.5.1.jar:na]
 at backtype.storm.util$partition_fixed.invoke(util.clj:868)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at clojure.lang.AFn.applyToHelper(AFn.java:163)
[clojure-1.5.1.jar:na]
 at clojure.lang.AFn.applyTo(AFn.java:151) [clojure-1.5.1.jar:na]
 at clojure.core$apply.invoke(core.clj:617) ~[clojure-1.5.1.jar:na]
 at clojure.lang.AFn.applyToHelper(AFn.java:163)
[clojure-1.5.1.jar:na]
 at clojure.lang.RestFn.applyTo(RestFn.java:132)
~[clojure-1.5.1.jar:na]
 at clojure.core$apply.invoke(core.clj:619) ~[clojure-1.5.1.jar:na]
 at clojure.core$partial$fn__4190.doInvoke(core.clj:2396)
~[clojure-1.5.1.jar:na]
 at clojure.lang.RestFn.invoke(RestFn.java:408)
~[clojure-1.5.1.jar:na]
 at
backtype.storm.util$map_val$iter__274__278$fn__279.invoke(util.clj:291)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at clojure.lang.LazySeq.sval(LazySeq.java:42)
~[clojure-1.5.1.jar:na]
 at clojure.lang.LazySeq.seq(LazySeq.java:60)
~[clojure-1.5.1.jar:na]
 at clojure.lang.Cons.next(Cons.java:39) ~[clojure-1.5.1.jar:na]
 at clojure.lang.RT.next(RT.java:598) ~[clojure-1.5.1.jar:na]
 at clojure.core$next.invoke(core.clj:64) ~[clojure-1.5.1.jar:na]
 at clojure.core.protocols$fn__6034.invoke(protocols.clj:146)
~[clojure-1.5.1.jar:na]
 at
clojure.core.protocols$fn__6005$G__6000__6014.invoke(protocols.clj:19)
~[clojure-1.5.1.jar:na]
 at clojure.core.protocols$seq_reduce.invoke(protocols.clj:31)
~[clojure-1.5.1.jar:na]
 at clojure.core.protocols$fn__6026.invoke(protocols.clj:54)
~[clojure-1.5.1.jar:na]
 at
clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
~[clojure-1.5.1.jar:na]
 at clojure.core$reduce.invoke(core.clj:6177)
~[clojure-1.5.1.jar:na]
 at clojure.core$into.invoke(core.clj:6229) ~[clojure-1.5.1.jar:na]
 at backtype.storm.util$map_val.invoke(util.clj:290)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at
backtype.storm.daemon.nimbus$compute_executors.invoke(nimbus.clj:435)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at
backtype.storm.daemon.nimbus$compute_executor__GT_component.invoke(nimbus.clj:446)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at
backtype.storm.daemon.nimbus$read_topology_details.invoke(nimbus.clj:339) 
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at
backtype.storm.daemon.nimbus$mk_assignments$iter__6522__6526$fn__6527.invoke(nimbus.clj:665)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at clojure.lang.LazySeq.sval(LazySeq.java:42)
~[clojure-1.5.1.jar:na]
 at clojure.lang.LazySeq.seq(LazySeq.java:60)
~[clojure-1.5.1.jar:na]
 at clojure.lang.RT.seq(RT.java:484) ~[clojure-1.5.1.jar:na]
 at clojure.core$seq.invoke(core.clj:133) ~[clojure-1.5.1.jar:na]
 at clojure.core.protocols$seq_reduce.invoke(protocols.clj:30)
~[clojure-1.5.1.jar:na]
 at clojure.core.protocols$fn__6026.invoke(protocols.clj:54)
~[clojure-1.5.1.jar:na]
 at
clojure.core.protocols$fn__5979$G__5974__5992.invoke(protocols.clj:13)
~[clojure-1.5.1.jar:na]
 at clojure.core$reduce.invoke(core.clj:6177)
~[clojure-1.5.1.jar:na]
 at clojure.core$into.invoke(core.clj:6229) ~[clojure-1.5.1.jar:na]
 at
backtype.storm.daemon.nimbus$mk_assignments.doInvoke(nimbus.clj:664)
~[storm-core-0.9.3.2.2.0.0-2041.jar:0.9.3.2.2.0.0-2041]
 at clojure.lang.RestFn.invoke(RestFn.java:410)
~[clojure-1.5.1.jar:na]
 at

Re: Ambari 2.0 Kerberos Activation - Failed to create keytab

2015-04-17 Thread Jeff Sposetti
Hi, Are you running your Ambari Server as non-root?

https://issues.apache.org/jira/browse/AMBARI-10266

You might be hitting that BUG.
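
A quick way to check (illustrative; the property is only written when a 
non-root user has been configured):

grep ambari-server.user /etc/ambari-server/conf/ambari.properties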

On 4/17/15, 3:41 PM, "Frank Eisenhauer"  wrote:

>Hi All,
>I'm trying to enable Kerberos in Ambari 2.0.0 after upgrade from Ambari
>1.7.
>
>During "Test Kerberos Client" I'm getting the error "Failed to create
>keytab file for ambari-qa_idhey...@bigdata.xxx - Failed to export keytab
>file"
>
>The ambari-server.log states:
>17 Apr 2015 21:41:29,601  INFO [Server Action Executor Worker 4215]
>CreateKeytabFilesServerAction:170 - Creating keytab file for
>ambari-qa_idheyfiu@BIGDATA$
>17 Apr 2015 21:41:29,636 ERROR [Server Action Executor Worker 4215]
>KerberosOperationHandler:433 - Failed to export keytab file
>java.io.FileNotFoundException:
>/var/lib/ambari-server/data/tmp/.ambari_1429299679291-0.d/HADOOP-SRV01/4e6
>d850833d0d36946b1c5c5b260bec371c5247c
>(Pe$
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:221)
> at 
>org.apache.directory.server.kerberos.shared.keytab.Keytab.writeFile(Keytab
>.java:273)
> at 
>org.apache.directory.server.kerberos.shared.keytab.Keytab.write(Keytab.jav
>a:133)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.cr
>eateKeytabFile(KerberosOperationHandler.java:429)
> at 
>org.apache.ambari.server.serveraction.kerberos.CreateKeytabFilesServerActi
>on.processIdentity(CreateKeytabFilesServerAction.java:276)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.proces
>sRecord(KerberosServerAction.java:494)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.proces
>sIdentities(KerberosServerAction.java:386)
> at 
>org.apache.ambari.server.serveraction.kerberos.CreateKeytabFilesServerActi
>on.execute(CreateKeytabFilesServerAction.java:99)
> at 
>org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(
>ServerActionExecutor.java:504)
> at 
>org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(Serv
>erActionExecutor.java:441)
> at java.lang.Thread.run(Thread.java:744)
>17 Apr 2015 21:41:29,637 ERROR [Server Action Executor Worker 4215]
>CreateKeytabFilesServerAction:290 - Failed to create keytab file for
>ambari-qa_idheyfiu$
>org.apache.ambari.server.serveraction.kerberos.KerberosOperationException:
> 
>Failed to export keytab file
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.cr
>eateKeytabFile(KerberosOperationHandler.java:439)
> at 
>org.apache.ambari.server.serveraction.kerberos.CreateKeytabFilesServerActi
>on.processIdentity(CreateKeytabFilesServerAction.java:276)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.proces
>sRecord(KerberosServerAction.java:494)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosServerAction.proces
>sIdentities(KerberosServerAction.java:386)
> at 
>org.apache.ambari.server.serveraction.kerberos.CreateKeytabFilesServerActi
>on.execute(CreateKeytabFilesServerAction.java:99)
> at 
>org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.execute(
>ServerActionExecutor.java:504)
> at 
>org.apache.ambari.server.serveraction.ServerActionExecutor$Worker.run(Serv
>erActionExecutor.java:441)
> at java.lang.Thread.run(Thread.java:744)
>Caused by: java.io.FileNotFoundException:
>/var/lib/ambari-server/data/tmp/.ambari_1429299679291-0.d/HADOOP-SRV01/4e6
>d850833d0d36946b1c5c5b260bec37$
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.(FileOutputStream.java:221)
> at 
>org.apache.directory.server.kerberos.shared.keytab.Keytab.writeFile(Keytab
>.java:273)
> at 
>org.apache.directory.server.kerberos.shared.keytab.Keytab.write(Keytab.jav
>a:133)
> at 
>org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.cr
>eateKeytabFile(KerberosOperationHandler.java:429)
> ... 7 more
>
>I've found a JIRA
>"https://issues.apache.org/jira/browse/AMBARI-10266" but the mentioned
>solution does not solve the issue. The permission denied exception still
>occurs.
>Ambari Server is running as root.
>



Re: ambari-server setup

2015-04-03 Thread Jeff Sposetti
You need to install the JDK on all hosts to the same location and use option 
[3] below during setup to enter that JDK path.


That way, Ambari will not attempt to download the JDK on the Ambari Server or 
any host, and will instead use the custom, pre-installed JDK path you specify.


Note: If you plan to enable Kerberos on the cluster in the future, be sure to 
also install the JCE with your JDK on the Ambari Server and on all hosts.
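
For automation, the same choice can be made non-interactively (a sketch; the 
JDK path is just an example, use wherever you installed it):

ambari-server setup -s -j /usr/jdk64/jdk1.7.0_67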



From: Pratik Gadiya 
Sent: Friday, April 03, 2015 6:24 AM
To: user@ambari.apache.org
Subject: ambari-server setup

Hi All,

I don't want to download the latest Oracle JDK every time while setting up 
ambari-server.
So, can someone let me know what steps I should follow in order to achieve 
this in automation?

# ambari-server setup
Using python  /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Ambari-server daemon is configured to run under user 'root'. Change this 
setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking firewall...
Checking JDK...
[1] - Oracle JDK 1.7 + Java Cryptography Extension (JCE) Policy Files 7
[2] - Oracle JDK 1.6 + Java Cryptography Extension (JCE) Policy Files 6
[3] - Custom JDK
==
Enter choice (1): 3

I need this because my machine will not have access to the public repo all the 
time, which will end up giving me errors.

Please let me know how can I do this.

Help Appreciated !!

Thanks & Regards,
Pratik Gadiya



Re: COMMERCIAL:Re: Did something get broken for webhcat today?

2015-03-18 Thread Jeff Sposetti
See if the API call here helps…might be what you are looking for…

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Step4:SetupStackRepositories%28Optional%29
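
A rough sketch of that call (server name and repo URL are illustrative; see 
the wiki page for details):

curl --user admin:admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Repositories": {"base_url": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0", "verify_base_url": true}}' \
  http://ambari-server:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/HDP-2.2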



From: Greg Hill <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 1:11 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: COMMERCIAL:Re: Did something get broken for webhcat today?

Ok, I'll see if I can figure out the API equivalent.  We are automating 
everything since we provide HDP clusters as a service.
Greg

From: Yusaku Sako <yus...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:06 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Greg,

Ambari does automatically retrieve the repo info for the latest maintenance 
version of the stack.
For example, if you select "HDP 2.2", it will pull the latest HDP 2.2.x version.
It seems like HDP 2.2.3 was released last night, so when you are installing a 
new cluster it is trying to install with 2.2.3.
Since you already have HDP 2.2.0 bits pre-installed on your image, you need to 
explicitly set the repo URL to 2.2.0 bits in the Select Stack page, as Jeff 
mentioned.

This is only true for new clusters being installed.
For adding hosts to existing clusters, it will continue to use the repo URL 
that you originally used to install the cluster with.

Yusaku

From: Greg Hill <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, March 19, 2015 1:56 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Did something get broken for webhcat today?

We did install that repo when we built the images we're using:

wget -O /etc/yum.repos.d/hdp.repo 
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0/hdp.repo

We preinstall a lot of packages on the images to reduce install time, including 
Ambari.  So our version of Ambari didn't change, and we didn't inject those new 
repos.  Does Ambari self-update or phone home to get the latest repos?  I can't 
figure out how the new repo got injected.

Greg


From: Jeff Sposetti <j...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:48 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?


In Ambari Web > Admin > Stack (or during install, on Select Stack, expand 
Advanced Repository Options): can you update your HDP repo Base URL to use the 
HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0



From: Greg Hill <greg.h...@rackspace.com>
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti <j...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: D

Re: Did something get broken for webhcat today?

2015-03-18 Thread Jeff Sposetti
In Ambari Web > Admin > Stack (or during install, on Select Stack, expand 
Advanced Repository Options): can you update your HDP repo Base URL to use the 
HDP 2.2 GA repository (instead of what it's pulling, which is 2.2.3.0)?


http://public-repo-1.hortonworks.com/HDP/centos6/2.x/GA/2.2.0.0



From: Greg Hill 
Sent: Wednesday, March 18, 2015 12:41 PM
To: user@ambari.apache.org
Subject: Re: Did something get broken for webhcat today?

We didn't change anything.  Ambari 1.7.0, HDP 2.2.  Repos are:

[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP.repo
[HDP-2.2]
name=HDP
baseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.2.3.0
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/HDP-UTILS.repo
[HDP-UTILS-1.1.0.20]
name=HDP-UTILS
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/centos6
path=/
enabled=1
gpgcheck=0
[root@gateway-1 ~]# cat /etc/yum.repos.d/ambari.repo
[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/updates/1.7.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1



From: Jeff Sposetti <j...@hortonworks.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 11:26 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:Re: Did something get broken for webhcat today?

Are you using ambari trunk or ambari 2.0.0 branch builds?

Also please confirm: your HDP repos have not changed (i.e., are you using local 
repos for the HDP stack packages)?

From: Greg Hill <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Did something get broken for webhcat today?

Starting this morning, we started seeing this on every single install.  I think 
someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This 
is rather urgent as we are no longer able to provision HDP 2.2 clusters at all 
because of it.


2015-03-18 15:58:05,982 - Group['hadoop'] {'ignore_failures': False}
2015-03-18 15:58:05,984 - Modifying group hadoop
2015-03-18 15:58:06,080 - Group['nobody'] {'ignore_failures': False}
2015-03-18 15:58:06,081 - Modifying group nobody
2015-03-18 15:58:06,219 - Group['users'] {'ignore_failures': False}
2015-03-18 15:58:06,220 - Modifying group users
2015-03-18 15:58:06,370 - Group['nagios'] {'ignore_failures': False}
2015-03-18 15:58:06,371 - Modifying group nagios
2015-03-18 15:58:06,474 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'nobody']}
2015-03-18 15:58:06,475 - Modifying user nobody
2015-03-18 15:58:06,558 - User['hive'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,559 - Modifying user hive
2015-03-18 15:58:06,634 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,635 - Modifying user mapred
2015-03-18 15:58:06,722 - User['nagios'] {'gid': 'nagios', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,723 - Modifying user nagios
2015-03-18 15:58:06,841 - User['ambari-qa'] {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:06,842 - Modifying user ambari-qa
2015-03-18 15:58:06,963 - User['zookeeper'] {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,964 - Modifying user zookeeper
2015-03-18 15:58:07,093 - User['tez'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'users']}
2015-03-18 15:58:07,094 - Modifying user tez
2015-03-18 15:58:07,217 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,218 - Modifying user hdfs
2015-03-18 15:58:07,354 - User['

Re: Did something get broken for webhcat today?

2015-03-18 Thread Jeff Sposetti
Are you using ambari trunk or ambari 2.0.0 branch builds?

Also please confirm: your HDP repos have not changed (i.e., are you using local 
repos for the HDP stack packages)?

From: Greg Hill <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, March 18, 2015 at 12:22 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Did something get broken for webhcat today?

Starting this morning, we started seeing this on every single install.  I think 
someone at Hortonworks pushed out a broken RPM or something.  Any ideas?  This 
is rather urgent as we are no longer able to provision HDP 2.2 clusters at all 
because of it.


2015-03-18 15:58:05,982 - Group['hadoop'] {'ignore_failures': False}
2015-03-18 15:58:05,984 - Modifying group hadoop
2015-03-18 15:58:06,080 - Group['nobody'] {'ignore_failures': False}
2015-03-18 15:58:06,081 - Modifying group nobody
2015-03-18 15:58:06,219 - Group['users'] {'ignore_failures': False}
2015-03-18 15:58:06,220 - Modifying group users
2015-03-18 15:58:06,370 - Group['nagios'] {'ignore_failures': False}
2015-03-18 15:58:06,371 - Modifying group nagios
2015-03-18 15:58:06,474 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'nobody']}
2015-03-18 15:58:06,475 - Modifying user nobody
2015-03-18 15:58:06,558 - User['hive'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,559 - Modifying user hive
2015-03-18 15:58:06,634 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,635 - Modifying user mapred
2015-03-18 15:58:06,722 - User['nagios'] {'gid': 'nagios', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,723 - Modifying user nagios
2015-03-18 15:58:06,841 - User['ambari-qa'] {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'users']}
2015-03-18 15:58:06,842 - Modifying user ambari-qa
2015-03-18 15:58:06,963 - User['zookeeper'] {'gid': 'hadoop', 
'ignore_failures': False, 'groups': [u'hadoop']}
2015-03-18 15:58:06,964 - Modifying user zookeeper
2015-03-18 15:58:07,093 - User['tez'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'users']}
2015-03-18 15:58:07,094 - Modifying user tez
2015-03-18 15:58:07,217 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,218 - Modifying user hdfs
2015-03-18 15:58:07,354 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,355 - Modifying user yarn
2015-03-18 15:58:07,485 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': 
False, 'groups': [u'hadoop']}
2015-03-18 15:58:07,486 - Modifying user hcat
2015-03-18 15:58:07,629 - File['/var/lib/ambari-agent/data/tmp/changeUid.sh'] 
{'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2015-03-18 15:58:07,631 - Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh 
ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
 2>/dev/null'] {'not_if': 'test $(id -u ambari-qa) -gt 1000'}
2015-03-18 15:58:07,768 - Skipping 
Execute['/var/lib/ambari-agent/data/tmp/changeUid.sh ambari-qa 
/tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa
 2>/dev/null'] due to not_if
2015-03-18 15:58:07,769 - Directory['/etc/hadoop/conf.empty'] {'owner': 'root', 
'group': 'root', 'recursive': True}
2015-03-18 15:58:07,770 - Link['/etc/hadoop/conf'] {'not_if': 'ls 
/etc/hadoop/conf', 'to': '/etc/hadoop/conf.empty'}
2015-03-18 15:58:07,895 - Skipping Link['/etc/hadoop/conf'] due to not_if
2015-03-18 15:58:07,960 - File['/etc/hadoop/conf/hadoop-env.sh'] {'content': 
InlineTemplate(...), 'owner': 'hdfs'}
2015-03-18 15:58:08,092 - Execute['/bin/echo 0 > /selinux/enforce'] {'only_if': 
'test -f /selinux/enforce'}
2015-03-18 15:58:08,240 - Skipping Execute['/bin/echo 0 > /selinux/enforce'] 
due to only_if
2015-03-18 15:58:08,241 - Directory['/var/log/hadoop'] {'owner': 'root', 
'group': 'hadoop', 'mode': 0775, 'recursive': True}
2015-03-18 15:58:08,244 - Directory['/var/run/hadoop'] {'owner': 'root', 
'group': 'root', 'recursive': True}
2015-03-18 15:58:08,250 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 
'recursive': True}
2015-03-18 15:58:08,278 - File['/etc/hadoop/conf/commons-logging.properties'] 
{'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,288 - File['/etc/hadoop/conf/health_check'] {'content': 
Template('health_check-v2.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,295 - File['/etc/hadoop/conf/log4j.properties'] {'content': 
'...', 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2015-03-18 15:58:08,322 - File['/etc/hadoop/conf/hadoop-metrics2.properties'] 
{'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs'}
2015-03-18 15:58:08,325 - File['/etc/hadoop/conf/task

Re: COMMERCIAL:RE: Server Restarts

2015-02-19 Thread Jeff Sposetti
Correct, you have to handle the components manually. The chkconfig approach 
works to get the Ambari Agent itself restarted.

There is a JIRA for this feature improvement if you want to follow.

https://issues.apache.org/jira/browse/AMBARI-2330
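
For the agent itself, that is just the standard service setup on each host, e.g.:

chkconfig ambari-agent on

The components still have to be started via Ambari (or its API) after a reboot 
until that improvement lands.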

From: Greg Hill <greg.h...@rackspace.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, February 19, 2015 at 3:02 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: COMMERCIAL:RE: Server Restarts

That won't make the agent auto-start components on restart.  Afaik, you have to 
do that manually.

Greg

From: johny casanova <pcgamer2...@outlook.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, February 19, 2015 at 7:45 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: COMMERCIAL:RE: Server Restarts

chkconfig "service" on


Date: Thu, 19 Feb 2015 06:41:23 -0600
Subject: Server Restarts
From: daniel.j.cies...@gmail.com
To: user@ambari.apache.org

How does one ensure that when Ambari clients are rebooted, the services that 
Ambari manages are started automatically?

Thanks
Dan


Re: Error during HDP installation - App timeline server

2015-02-18 Thread Jeff Sposetti
The error indicates it can't locate the packages, but the .repo files look 
right and you have the repositories.

What happens if, from the command line, you run:

yum -y install hadoop_2_2_*-yarn

From: Joshi Omkar <omkar.jo...@scania.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, February 18, 2015 at 11:20 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: RE: Error during HDP installation - App timeline server

I’m able to access
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/
and
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/repodata/repomd.xml
from the browser.


Can you check the error log that I’m getting (posted in the original mail)? 
It's failing for Hadoop RPMs, which I’m able to access, as mentioned in

http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item2.6


From: Jeff Sposetti [mailto:j...@hortonworks.com]
Sent: 18 February 2015 17:14
To: user@ambari.apache.org
Subject: Re: Error during HDP installation - App timeline server

Hi,

To confirm: you built your local repositories on this host?
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/

To check that the repo is set up properly, can you browse this file (which is 
the repo metadata)?
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/repodata/repomd.xml

Instructions for building the local repos:
http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item2.6


From: Joshi Omkar <omkar.jo...@scania.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, February 18, 2015 at 10:59 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: RE: Error during HDP installation - App timeline server

Yeah, I have read that, but the issue I'm getting is specific to the repo 
config, which I'm not sure the doc can help solve.

From: Devopam Mittra [mailto:devo...@gmail.com]
Sent: 18 February 2015 16:56
To: user@ambari.apache.org
Subject: Re: Error during HDP installation - App timeline server

#IMHO you may actually refer to Hortonworks' instruction manual for 
installation / upgrade to see if things are fine... might help

regards
Devopam

On Wed, Feb 18, 2015 at 9:21 PM, Joshi Omkar <omkar.jo...@scania.com> wrote:
Using Ambari 1.7, I’m trying to install the HDP 2.2 stack on 9 nodes.

The HDP.repo is:

[HDP-2.2]
name=HDP
baseurl=http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/
path=/
enabled=1
gpgcheck=0


The ambari.repo file is:

[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=0
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://l1032lab.sss.se.scania.com/ambari/centos6/1.x/updates/1.7.0/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

I get several warnings and one error for the App Timeline Server, whose log is 
given below, but the RPM 
(hadoop/hadoop_2_2_0_0_2041-yarn-2.6.0.2.2.0.0-2041.el6.x86_64.rpm) exists and 
I’m able to see it even in the browser:

stderr:
2015-02-18 16:28:28,134 - Error while executing command 'install':
Traceback (most recent call last):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 123, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py",
 line 30, in install
self.install_packages(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 188, in install_packages
Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 148, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 149, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 115, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
 line 40, in action_install
self.install_package(package_name)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
 line 36, in install_package
shell.checked_call(cmd)
  File "/usr/lib/pyt

Re: Error during HDP installation - App timeline server

2015-02-18 Thread Jeff Sposetti
Hi,

To confirm: you built your local repositories on this host?
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/

To check that the repo is set up properly, can you browse this file (which is 
the repo metadata)?
http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/repodata/repomd.xml

Instructions for building the local repos:
http://docs.hortonworks.com/HDPDocuments/Ambari-1.7.0.0/AMBARI_DOC_SUITE/index.html#Item2.6
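
To double-check the repo from one of the cluster hosts (illustrative commands):

curl -I http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/repodata/repomd.xml
yum clean all
yum repolist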


From: Joshi Omkar <omkar.jo...@scania.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Wednesday, February 18, 2015 at 10:59 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: RE: Error during HDP installation - App timeline server

Yeah, I have read that, but the issue I'm getting is specific to the repo 
config, which I'm not sure the doc can help solve.

From: Devopam Mittra [mailto:devo...@gmail.com]
Sent: 18 February 2015 16:56
To: user@ambari.apache.org
Subject: Re: Error during HDP installation - App timeline server

#IMHO you may actually refer to Hortonworks' instruction manual for 
installation / upgrade to see if things are fine... might help

regards
Devopam

On Wed, Feb 18, 2015 at 9:21 PM, Joshi Omkar 
mailto:omkar.jo...@scania.com>> wrote:
Using Ambari 1.7, I’m trying to install the HDP 2.2 stack on 9 nodes.

The HDP.repo is :

[HDP-2.2]
name=HDP
baseurl=http://l1032lab.sss.se.scania.com/hdp/HDP/centos6/2.x/GA/2.2.0.0/
path=/
enabled=1
gpgcheck=0


The ambari.conf file is :

[ambari-1.x]
name=Ambari 1.x
baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/1.x/GA
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=0
priority=1

[Updates-ambari-1.7.0]
name=ambari-1.7.0 - Updates
baseurl=http://l1032lab.sss.se.scania.com/ambari/centos6/1.x/updates/1.7.0/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

I get several warnings and one error for the App Timeline Server, whose log is 
given below, but the rpm 
(hadoop/hadoop_2_2_0_0_2041-yarn-2.6.0.2.2.0.0-2041.el6.x86_64.rpm) exists and 
I'm able to see it even in the browser:

stderr:
2015-02-18 16:28:28,134 - Error while executing command 'install':
Traceback (most recent call last):
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 123, in execute
method(env)
  File 
"/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/services/YARN/package/scripts/application_timeline_server.py",
 line 30, in install
self.install_packages(env)
  File 
"/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
 line 188, in install_packages
Package(name)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
line 148, in __init__
self.env.run()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 149, in run
self.run_action(resource, action)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
line 115, in run_action
provider_action()
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
 line 40, in action_install
self.install_package(package_name)
  File 
"/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
 line 36, in install_package
shell.checked_call(cmd)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 36, in checked_call
return _call(command, logoutput, True, cwd, env, preexec_fn, user, 
wait_for_finish, timeout, path)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
line 102, in _call
raise Fail(err_msg)
Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_2_*-yarn' 
returned 1. Error Downloading Packages:
  hadoop_2_2_0_0_2041-yarn-2.6.0.2.2.0.0-2041.el6.x86_64: failure: 
hadoop/hadoop_2_2_0_0_2041-yarn-2.6.0.2.2.0.0-2041.el6.x86_64.rpm from HDP-2.2: 
[Errno 256] No more mirrors to try.
stdout:
2015-02-18 16:27:46,888 - Execute['mkdir -p 
/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/; curl -kf -x "" --retry 10 

http://l1032lab.sss.se.scania.com:8080/resources//UnlimitedJCEPolicyJDK7.zip
 -o 
/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip'] 
{'environment': ..., 'not_if': 'test -e 
/var/lib/ambari-agent/data/tmp/AMBARI-artifacts//UnlimitedJCEPolicyJDK7.zip', 
'ignore_failures': True, 'path': ['/bin', '/usr/bin/']}
2015-02-18 16:27:46,928 - Skipping Execute['mkdir -p 
/var/lib/ambari-agent/data/tmp/AMBARI-artifacts/; curl -kf -x "" --retry 10 

http://l1032lab.sss.se.scania.com:8080/resources//UnlimitedJCEPolicyJDK7.zip

Re: Ambari Views Client-Server Interaction Preference

2015-02-17 Thread Jeff Sposetti
Hi,

Take a look at this example. In this case, it builds the path in the index.html 
page (it doesn't use dot-dots but uses absolute paths):

https://github.com/apache/ambari/blob/trunk/ambari-views/examples/simple-view/src/main/resources/ui/index.html

Best not to hardcode the instance name.

Jeff

From: "John.Bork" mailto:john.b...@target.com>>
Reply-To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Date: Monday, February 16, 2015 at 5:13 PM
To: "user@ambari.apache.org" 
mailto:user@ambari.apache.org>>
Subject: Ambari Views Client-Server Interaction Preference

Hi, I was wondering: is it preferred for the index.html of an Ambari View to 
make dynamic calls to an Ambari View instance via a hard-coded relative path, 
or to generate the relative path for the index? For example, an Ambari View I 
am working on loads an index.html for the view, and the user can initiate AJAX 
calls to the Ambari View API from the index.html. Currently the AJAX call uses 
a relative path, "..//api/v1/rest/of/path", embedded in the index.html. Would it be better to 
build this path from the Java class method in the Ambari View server instead of 
having it embedded in the index.html?


John Bork





Re: Java version upgrade

2015-02-16 Thread Jeff Sposetti
Re-running "ambari-server setup" should not disturb your running cluster or 
cluster configuration. When prompted, modify the JDK choice and then do not 
modify anything else.


A couple other notes...


1) Be sure the Custom JDK path is correct on all hosts in the cluster.

2) From Ambari Web, restart your services for the new JDK path to start being 
used.

3) After you run setup, you can check the java.home property from the Ambari 
Server...


http://c6401.ambari.apache.org:8080/api/v1/services/AMBARI/components/AMBARI_SERVER
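
Putting those together, a minimal sketch (admin/admin credentials, the default
port, and the example hostname are placeholders; -j is the non-interactive JDK
argument referenced in the guide below):

ambari-server setup -j /my/path/to/jdk     # point the server at the custom JDK
curl -u admin:admin http://c6401.ambari.apache.org:8080/api/v1/services/AMBARI/components/AMBARI_SERVER | grep java.home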



From: Giovanni Paolo Gibilisco 
Sent: Monday, February 16, 2015 1:09 PM
To: user@ambari.apache.org
Subject: Re: Java version upgrade

Thanks for your reply,
as I understood, running ambari-server setup and selecting the JDK (or using the 
-j argument as shown in the guide 
https://ambari.apache.org/1.2.2/installing-hadoop-using-ambari/content/ambari-chap2-2-1.html)
will reset my current Ambari installation, so I will lose the configuration of 
the cluster that is already running.
Is this correct? If so, I cannot re-run the setup, otherwise I'll have to 
reconfigure the entire cluster.
Thanks.
Giovanni


On Mon Feb 16 2015 at 14:42:45 Dmitry Sen 
mailto:d...@hortonworks.com>> wrote:

Hi Giovanni,


Ambari supports customized JDK :


# ambari-server setup

Using python  /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Ambari-server daemon is configured to run under user 'root'. Change this 
setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking iptables...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.7
[2] Oracle JDK 1.6
[3] - Custom JDK
==
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all 
hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If 
you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction 
Policy Files are valid on all hosts.
Path to JAVA_HOME: /my/path/to/jdk

...



Thanks,


BR,

Dmytro Sen



From: Giovanni Paolo Gibilisco mailto:gibb...@gmail.com>>
Sent: Monday, February 16, 2015 11:22 AM
To: user@ambari.apache.org
Subject: Java version upgrade

Hi,
I'm trying to upgrade the version of Java used in the cluster in order to 
support Java 8. I've managed to install Java 8 and set JAVA_HOME correctly 
on all the nodes in the cluster. I've restarted the services using Ambari and 
even restarted the Ambari server and agents, but still, when I submit a job using 
YARN I get an exception in my code saying
"Exception in thread "main" java.lang.UnsupportedClassVersionError: 
it/polimi/tssotn/dataloader/DataLoader : Unsupported major.minor version 52.0"
which basically means it is not running with Java 8.
Is there a way to tell Ambari to configure YARN (and all other services) to use 
the new JRE?


Re: Problem with Ambari 1.7 recognizing hosts running CentOS 6

2014-12-17 Thread Jeff Sposetti
Hi David, Try sending in plain/text, not HTML.

On Wed, Dec 17, 2014 at 7:10 PM, David Novogrodsky <
david.novogrod...@gmail.com> wrote:
>
> I am having problems adding more information to this post:
> Delivery to the following recipient failed permanently:
>
>  user@ambari.apache.org
>
> Technical details of permanent failure:
> Google tried to deliver your message, but it was rejected by the server
> for the recipient domain ambari.apache.org by
> mx1.eu.apache.org.[192.87.106.230].
>
> The error that the other server returned was:
> 552 spam score (6.3) exceeded threshold
> (HTML_MESSAGE,LONGWORDS,RCVD_IN_DNSWL_LOW,SPF_PASS,SPOOF_COM2OTH,WEIRD_PORT
>
> David Novogrodsky
> david.novogrod...@gmail.com
> http://www.linkedin.com/in/davidnovogrodsky
>
> On Wed, Dec 17, 2014 at 1:12 PM, David Novogrodsky <
> david.novogrod...@gmail.com> wrote:
>>
>> The error from the registration log is as follows:
>> ==
>> Running setup agent script...
>> ==
>> Agent log at: /var/log/ambari-agent/ambari-
>> agent.log
>> ("WARNING 2014-12-17 10:43:08,349 NetUtil.py:92 - Server at
>> https://namenode .
>> localdomain.com.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode.namenode:8440
>> is not reachable, sleeping for 10 seconds...
>>
>> David Novogrodsky
>> david.novogrod...@gmail.com
>> http://www.linkedin.com/in/davidnovogrodsky
>>
>>



Re: Hive Restart via ambari runs over changes made to hive-env.sh

2014-11-23 Thread Jeff Sposetti
You'll want to start using Ambari 1.7.0, where you can manage the content
of the -env.sh file as a configuration in Ambari. That way, your changes
are saved and versioned as part of the other Hive configs (and you no
longer have to make changes to the file locally).

Browse to Service > Hive > Configs and look for the section on hive-env.sh.

1.7.0 is just in the release vote but you can grab a build using the links
on this page:

https://cwiki.apache.org/confluence/display/AMBARI/Quick+Start+Guide



On Sun, Nov 23, 2014 at 5:59 AM, Daniel Haviv  wrote:

> Hi,
> I'm trying to add TEZ to the classpath by updating the hive-env.sh in
> hive's conf dir, but every time I restart Hive via Ambari the file gets
> reverted.
>
> How can I avoid Ambari running over my changes?
>
> Thanks,
> Daniel
>



Re: a couple API questions

2014-11-10 Thread Jeff Sposetti
On your Question #1:

/api/v1/services/AMBARI/components/AMBARI_SERVER

You'll see something like this:

"component_version" : "1.7.0",


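A one-liner to pull that out of the response (a sketch; the server name,
credentials, and port are placeholders):

curl -s -u admin:admin http://your.ambari.server:8080/api/v1/services/AMBARI/components/AMBARI_SERVER | grep component_version
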
On Mon, Nov 10, 2014 at 3:15 PM, Greg Hill  wrote:

>  1. Is there a way to query the API to see what version of ambari the
> server is running?  This would make auto-negotiation in the client easy, so
> it can automatically account for version differences.  If this doesn't
> exist, I can open a JIRA to have it added.  We have a dev on the team that
> is planning on doing some contributions to the server code soon.
> 2. Is there any documentation of user privileges?  Like, how do I add
> privileges to a new user via the API?  What privileges are possible to
> assign (maybe this is retrievable via a different URL)?
>
>  If there's an easier way to find this information, let me know so I can
> just look there in the future.  I can't seem to find any authoritative
> source in the code for what URLs exist and what parameters they expect, but
> maybe I just am searching for the wrong thing.
>
>  Thanks in advance.
>
>  Greg
>



Re: how to install a specific version of HDP using Ambari

2014-11-05 Thread Jeff Sposetti
2014-11-05 17:12:14,221 - Modifying user nagios
> 2014-11-05 17:12:14,232 - User['oozie'] {'gid': 'hadoop', 'ignore_failures': 
> False}
> 2014-11-05 17:12:14,232 - Modifying user oozie
> 2014-11-05 17:12:14,242 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': 
> False}
> 2014-11-05 17:12:14,243 - Modifying user hcat
> 2014-11-05 17:12:14,252 - User['hcat'] {'gid': 'hadoop', 'ignore_failures': 
> False}
> 2014-11-05 17:12:14,253 - Modifying user hcat
> 2014-11-05 17:12:14,262 - User['hive'] {'gid': 'hadoop', 'ignore_failures': 
> False}
> 2014-11-05 17:12:14,262 - Modifying user hive
> 2014-11-05 17:12:14,272 - User['yarn'] {'gid': 'hadoop', 'ignore_failures': 
> False}
> 2014-11-05 17:12:14,272 - Modifying user yarn
> 2014-11-05 17:12:14,282 - Group['nobody'] {'ignore_failures': False}
> 2014-11-05 17:12:14,282 - Modifying group nobody
> 2014-11-05 17:12:14,305 - Group['nobody'] {'ignore_failures': False}
> 2014-11-05 17:12:14,305 - Modifying group nobody
> 2014-11-05 17:12:14,326 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'nobody']}
> 2014-11-05 17:12:14,326 - Modifying user nobody
> 2014-11-05 17:12:14,337 - User['nobody'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'nobody']}
> 2014-11-05 17:12:14,337 - Modifying user nobody
> 2014-11-05 17:12:14,350 - User['hdfs'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'hadoop']}
> 2014-11-05 17:12:14,350 - Modifying user hdfs
> 2014-11-05 17:12:14,366 - User['mapred'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'hadoop']}
> 2014-11-05 17:12:14,368 - Modifying user mapred
> 2014-11-05 17:12:14,387 - User['zookeeper'] {'gid': 'hadoop', 
> 'ignore_failures': False}
> 2014-11-05 17:12:14,388 - Modifying user zookeeper
> 2014-11-05 17:12:14,405 - User['storm'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'hadoop']}
> 2014-11-05 17:12:14,405 - Modifying user storm
> 2014-11-05 17:12:14,425 - User['falcon'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'hadoop']}
> 2014-11-05 17:12:14,426 - Modifying user falcon
> 2014-11-05 17:12:14,446 - User['tez'] {'gid': 'hadoop', 'ignore_failures': 
> False, 'groups': [u'users']}
> 2014-11-05 17:12:14,446 - Modifying user tez
> 2014-11-05 17:12:14,576 - Package['falcon'] {}
> 2014-11-05 17:12:14,610 - Installing package falcon ('/usr/bin/yum -d 0 -e 0 
> -y install falcon')
> 2014-11-05 17:12:21,987 - Error while executing command 'install':
> Traceback (most recent call last):
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 111, in execute
> method(env)
>   File 
> "/var/lib/ambari-agent/cache/stacks/HDP/2.1/services/FALCON/package/scripts/falcon_client.py",
>  line 25, in install
> self.install_packages(env)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py",
>  line 167, in install_packages
> Package(name)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", 
> line 148, in __init__
> self.env.run()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 149, in run
> self.run_action(resource, action)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", 
> line 115, in run_action
> provider_action()
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py",
>  line 40, in action_install
> self.install_package(package_name)
>   File 
> "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py",
>  line 36, in install_package
> shell.checked_call(cmd)
>   File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", 
> line 35, in checked_call
> return _call(command, logoutput, True, cwd, env, preexec_fn, user, 
> wait_for_finish, timeout)
>   File "/usr

Re: how to install a specific version of HDP using Ambari

2014-11-04 Thread Jeff Sposetti
You are correct that Ambari will grab the latest HDP 2.1.x maintenance
release repos if you are connected to the internet (for it to check for the
latest) and you select stack HDP 2.1.

But if you want to install an older version of HDP 2.1.x, do the following:

1) During install, on the Select Stack page, select HDP 2.1
2) Expand the Advanced Repository Options section
3) Enter the Base URL for the HDP 2.1.x version you wish to install
(overwriting the 2.1.7.0 repo entries that show up by default)

Looking at the docs here:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.1.3/index.html

The Base URL for HDP is
http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.1.3.0 and
HDP-UTILS is
http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.17/repos/centos6



On Tue, Nov 4, 2014 at 8:04 PM, guxiaobo1982  wrote:

> Hi,
>
> The current GUI lets the user choose major versions of HDP to install, such
> as 2.1 or 2.0, and will install the latest minor version, such as 2.1.7. But
> how can I choose to install a specific minor version such as 2.1.3, since I
> found that 2.1.7 may have some bugs around Hive?
>
> Regards,
>
> Xiaobo gu
>



Re: possible bug in the Ambari API

2014-11-03 Thread Jeff Sposetti
Greg, That's the /stacks2 API. Want to try with /stacks (which I think is
the preferred API resource)?

http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content


[
  {
"href" : 
"http://c6401.ambari.apache.org:8080/api/v1/stacks/HDP/versions/2.1/services/HBASE/configurations/content";,
"StackConfigurations" : {
  "final" : "false",
  "property_description" : "Custom log4j.properties",
  "property_name" : "content",
  "property_type" : [ ],
  "property_value" : "\n# Licensed to the Apache Software
Foundation (ASF) under one\n# or more contributor license agreements.
See the NOTICE file\n# distributed with this work for additional
information\n# regarding copyright ownership.  The ASF licenses this
file\n# to you under the Apache License, Version 2.0 (the\n#
\"License\"); you may not use this file except in compliance\n# with
the License.  You may obtain a copy of the License at\n#\n#
http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by
applicable law or agreed to in writing, software\n# distributed under
the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n#
See the License for the specific language governing permissions and\n#
limitations under the License.\n\n\n# Define some default values that
can be overridden by system
properties\nhbase.root.logger=INFO,console\nhbase.security.logger=INFO,console\nhbase.log.dir=.\nhbase.log.file=hbase.log\n\n#
Define the root logger to the system property
\"hbase.root.logger\".\nlog4j.rootLogger=${hbase.root.logger}\n\n#
Logging Threshold\nlog4j.threshold=ALL\n\n#\n# Daily Rolling File
Appender\n#\nlog4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender\nlog4j.appender.DRFA.File=${hbase.log.dir}/${hbase.log.file}\n\n#
Rollver at midnight\nlog4j.appender.DRFA.DatePattern=.-MM-dd\n\n#
30-day 
backup\n#log4j.appender.DRFA.MaxBackupIndex=30\nlog4j.appender.DRFA.layout=org.apache.log4j.PatternLayout\n\n#
Pattern format: Date LogLevel LoggerName
LogMessage\nlog4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601}
%-5p [%t] %c{2}: %m%n\n\n# Rolling File Appender
properties\nhbase.log.maxfilesize=256MB\nhbase.log.maxbackupindex=20\n\n#
Rolling File 
Appender\nlog4j.appender.RFA=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}\n\nlog4j.appender.RFA.MaxFileSize=${hbase.log.maxfilesize}\nlog4j.appender.RFA.MaxBackupIndex=${hbase.log.maxbackupindex}\n\nlog4j.appender.RFA.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFA.layout.ConversionPattern=%d{ISO8601}
%-5p [%t] %c{2}: %m%n\n\n#\n# Security audit
appender\n#\nhbase.security.log.file=SecurityAuth.audit\nhbase.security.log.maxfilesize=256MB\nhbase.security.log.maxbackupindex=20\nlog4j.appender.RFAS=org.apache.log4j.RollingFileAppender\nlog4j.appender.RFAS.File=${hbase.log.dir}/${hbase.security.log.file}\nlog4j.appender.RFAS.MaxFileSize=${hbase.security.log.maxfilesize}\nlog4j.appender.RFAS.MaxBackupIndex=${hbase.security.log.maxbackupindex}\nlog4j.appender.RFAS.layout=org.apache.log4j.PatternLayout\nlog4j.appender.RFAS.layout.ConversionPattern=%d{ISO8601}
%p %c: 
%m%n\nlog4j.category.SecurityLogger=${hbase.security.logger}\nlog4j.additivity.SecurityLogger=false\n#log4j.logger.SecurityLogger.org.apache.hadoop.hbase.security.access.AccessController=TRACE\n\n#\n#
Null 
Appender\n#\nlog4j.appender.NullAppender=org.apache.log4j.varia.NullAppender\n\n#\n#
console\n# Add \"console\" to rootlogger above if you want to use
this\n#\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%d{ISO8601}
%-5p [%t] %c{2}: %m%n\n\n# Custom Logging
levels\n\nlog4j.logger.org.apache.zookeeper=INFO\n#log4j.logger.org.apache.hadoop.fs.FSNamesystem=DEBUG\nlog4j.logger.org.apache.hadoop.hbase=DEBUG\n#
Make these two classes INFO-level. Make them DEBUG to see more zk
debug.\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZKUtil=INFO\nlog4j.logger.org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher=INFO\n#log4j.logger.org.apache.hadoop.dfs=DEBUG\n#
Set this class to log INFO only otherwise its OTT\n# Enable this to
get detailed connection error/retry logging.\n#
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=TRACE\n\n\n#
Uncomment this line to enable tracing on _every_ RPC call (this can be
a lot of 
output)\n#log4j.logger.org.apache.hadoop.ipc.HBaseServer.trace=DEBUG\n\n#
Uncomment the below if you want to remove logging of client region
caching'\n# and scan of .META. messages\n#
log4j.logger.org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation=INFO\n#
log4j.logger.org.apache.hadoop.hbase.client.MetaScanner=INFO\n\n",
  "service_name" : "HBASE",
  "stack_name" : "HDP",
  "stack_version" : "2.1",

Re: Can't add ganglia monitor

2014-09-17 Thread Jeff Sposetti
Looks like you are missing the x-requested-by header:

-H "X-Requested-By: ambari"
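
With the header added, the delete would look something like this (a sketch built
from your command below; the {...} placeholders stay yours to fill in, and
admin:admin stands in for your credentials):

curl -u admin:admin -H "X-Requested-By: ambari" -i -X DELETE http://{ambari server}:8080/api/v1/clusters/{my cluster name}/hosts/{slave node name}/host_components/GANGLIA_MONITOR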


On Wed, Sep 17, 2014 at 11:49 AM, Charles Robertson
 wrote:
> Hi Jeff,
>
> The failure was attributable to PICNIC, so I don't think there's much the
> Ambari devs could do :)
>
> I've tried using the REST API, but I keep getting 400 bad request messages,
> and just can't see what's wrong:
>
>  curl -u {user:admin} -i -X DELETE http://{ambari
> server}:8080/api/v1/clusters/{my cluster name}/hosts/{slave node
> name}/host_components/GANGLIA_MONITOR
>
> Any advice?
>
> Thanks again,
> Charles
>
> On 17 September 2014 13:57, Jeff Sposetti  wrote:
>>
>> Ok. In Ambari 1.6.1, Ambari Web added the ability to add Ganglia Monitors
>> to hosts. So that's why you don't see it. REST API is fine or you can
>> upgrade to 1.6.1 as well.
>>
>> https://issues.apache.org/jira/browse/AMBARI-5530
>>
>> On a separate note: if you can file a JIRA on the failure you received
>> during "Add Host", that might be helpful in case there is a known issue or
>> something that needs attention.
>>
>> https://issues.apache.org/jira/browse/AMBARI
>>
>>
>>
>>
>> On Wed, Sep 17, 2014 at 8:54 AM, Charles Robertson
>>  wrote:
>>>
>>> Hi Jeff,
>>>
>>> Thanks for replying - I'm using Ambari 1.6.0. Ganglia Monitor is not
>>> available on +Add button. The wiki page is useful - there was a failure
>>> during adding the host, so I guess I need to decide what my 'favourite REST
>>> tool' is :)
>>>
>>> Thanks for your help,
>>> Charles
>>>
>>> On 17 September 2014 13:37, Jeff Sposetti  wrote:
>>>>
>>>> Which version of Ambari are you using?
>>>>
>>>> Browse to Hosts and then to the Host in question. Is Ganglia Monitor an
>>>> option on the "+ Add" button?
>>>>
>>>> BTW, checkout this wiki page. It has some info on adding components to
>>>> hosts (as part of adding/removing hosts from a cluster).
>>>>
>>>>
>>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40508365
>>>>
>>>>
>>>>
>>>> On Wed, Sep 17, 2014 at 4:16 AM, Charles Robertson
>>>>  wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I have a node without ganglia monitor on it, and when I navigate to the
>>>>> host in Ambari it doesn't give me the option to add it to the host. This
>>>>> (seems) to be giving me warnings in Ganglia that it can't connect to 
>>>>> certain
>>>>> services. This also seems to be why Ambari isn't reporting disk space 
>>>>> usage
>>>>> for that node.
>>>>>
>>>>> How can I add the ganglia monitor to this node?
>>>>>
>>>>> Thanks,
>>>>> Charles
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>



Re: Can't add ganglia monitor

2014-09-17 Thread Jeff Sposetti
Ok. In Ambari 1.6.1, Ambari Web added the ability to add Ganglia Monitors
to hosts. So that's why you don't see it. REST API is fine or you can
upgrade to 1.6.1 as well.

https://issues.apache.org/jira/browse/AMBARI-5530

On a separate note: if you can file a JIRA on the failure you received
during "Add Host", that might be helpful in case there is a known issue or
something that needs attention.

https://issues.apache.org/jira/browse/AMBARI




On Wed, Sep 17, 2014 at 8:54 AM, Charles Robertson <
charles.robert...@gmail.com> wrote:

> Hi Jeff,
>
> Thanks for replying - I'm using Ambari 1.6.0. Ganglia Monitor is not
> available on +Add button. The wiki page is useful - there was a failure
> during adding the host, so I guess I need to decide what my 'favourite REST
> tool' is :)
>
> Thanks for your help,
> Charles
>
> On 17 September 2014 13:37, Jeff Sposetti  wrote:
>
>> Which version of Ambari are you using?
>>
>> Browse to Hosts and then to the Host in question. Is Ganglia Monitor an
>> option on the "+ Add" button?
>>
>> BTW, checkout this wiki page. It has some info on adding components to
>> hosts (as part of adding/removing hosts from a cluster).
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40508365
>>
>>
>>
>> On Wed, Sep 17, 2014 at 4:16 AM, Charles Robertson <
>> charles.robert...@gmail.com> wrote:
>>
>>> Hi all,
>>>
>>> I have a node without ganglia monitor on it, and when I navigate to the
>>> host in Ambari it doesn't give me the option to add it to the host. This
>>> (seems) to be giving me warnings in Ganglia that it can't connect to
>>> certain services. This also seems to be why Ambari isn't reporting disk
>>> space usage for that node.
>>>
>>> How can I add the ganglia monitor to this node?
>>>
>>> Thanks,
>>> Charles
>>>
>>
>>
>
>
>



Re: Can't add ganglia monitor

2014-09-17 Thread Jeff Sposetti
Which version of Ambari are you using?

Browse to Hosts and then to the Host in question. Is Ganglia Monitor an
option on the "+ Add" button?

BTW, checkout this wiki page. It has some info on adding components to
hosts (as part of adding/removing hosts from a cluster).

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=40508365



On Wed, Sep 17, 2014 at 4:16 AM, Charles Robertson <
charles.robert...@gmail.com> wrote:

> Hi all,
>
> I have a node without ganglia monitor on it, and when I navigate to the
> host in Ambari it doesn't give me the option to add it to the host. This
> (seems) to be giving me warnings in Ganglia that it can't connect to
> certain services. This also seems to be why Ambari isn't reporting disk
> space usage for that node.
>
> How can I add the ganglia monitor to this node?
>
> Thanks,
> Charles
>



Re: question on [STACK]/[SERVICE]/metainfo.xml inheritance rules

2014-09-05 Thread Jeff Sposetti
Captured and started this wiki page:

https://cwiki.apache.org/confluence/display/AMBARI/FAQ#

On Fri, Sep 5, 2014 at 9:54 AM, Sumit Mohanty  wrote:
> Could we save these as FAQs on the Ambari wiki?
>
> -Sumit
>
>
> On Thu, Sep 4, 2014 at 5:53 PM, Siddharth Wagle 
> wrote:
>>
>> Hi Alex,
>>
>> Replies inline.
>>
>> 1. If a component exists in the parent stack and is defined again in the
>> child stack with just a few attributes, do these values just override
>> the parent's values, or is the whole component definition replaced?
>>
>> We go property by property and merge them from parent to child. So if you
>> remove a category, for example, from the child, it will be inherited from the
>> parent; that goes for pretty much all properties.
>> So the question is how we tackle the existence of a property in both
>> parent and child. Here, most decisions still follow the same paradigm:
>> take the child value instead of the parent's, and every property in the parent
>> not explicitly deleted from the child using a marker tag is included
>> in the merge.
>>
>> - For config-dependencies, we take an all-or-nothing approach: if this
>> property exists in the child, use it and all of its children; else take it from
>> the parent.
>> - The custom commands are merged based on names, such that the merged
>> definition is a union of commands, with child commands of the same name
>> overriding those from the parent.
>> - Cardinality is overwritten by the child, or taken from the parent if the child
>> has not provided one.
>>
>> You could look at this method for more details:
>> org.apache.ambari.server.api.util.StackExtensionHelper#mergeServices
>>
>> 2. If a component is missing in the new definition but is present in the
>> parent, does it get inherited ?
>>
>> Generally yes.
>>
>> 3. Configuration dependencies for the service -- are they overwritten or
>> merged ?
>>
>> Overwritten.
>>
>> 4. What about other elements in metainfo.xml -- which rules apply ?
>>
>> Answered in 1.
>>
>> -Sid
>>
>>
>>
>>
>>
>>
>> On Thu, Sep 4, 2014 at 5:02 PM, Alexander Denissov 
>> wrote:
>>>
>>> I am trying to understand the inheritance rules that govern services
>>> metainfo.xml file contents. I looked at
>>> https://issues.apache.org/jira/browse/AMBARI-2819 but it didn't answer the
>>> following:
>>>
>>> 1. If a component exists in the parent stack and is defined again in the
>>> child stack with just a few attributes, do these values just override
>>> the parent's values, or is the whole component definition replaced?
>>>
>>> Example: HDP-2.1 YARN/metainfo.xml contains a definition of RESOURCEMANAGER
>>> with just 4 attributes, out of which only the value for "cardinality" is
>>> different from the one in the HDP-2.0.6 definition. But the 2.0.6 definition also has a lot
>>> more attributes (such as custom commands) that are not mentioned in 2.1.
>>> Will these "missing" attributes be inherited by the 2.1 stack? If yes, why
>>> are other attributes (category and configuration-dependencies) defined again
>>> with the same values instead of being inherited?
>>>
>>> 2. If a component is missing in the new definition but is present in the
>>> parent, does it get inherited ?
>>>
>>> 3. Configuration dependencies for the service -- are they overwritten or
>>> merged ?
>>>
>>> Example: HDP-2.1 YARN/metainfo.xml contains a configuration-dependencies
>>> element with 4 config entries, whereas in HDP-2.0.6 the same element has 5
>>> (the extra line is mapred-site). So will
>>> mapred-site be inherited and present in the 2.1
>>> definition, or was this the way to get rid of this specific line for the new
>>> stack?
>>>
>>> 4. What about other elements in metainfo.xml -- which rules apply ?
>>>
>>> --
>>> Thanks,
>>> Alex.
>>
>>
>>
>
>
>


Re: RestAPI for pushing a configuration to a component in 1.2.4

2014-07-29 Thread Jeff Sposetti
You can also checkout this wiki page on Ambari Stacks. The "Custom Client
Service" example might be worth looking at...

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133


On Tue, Jul 29, 2014 at 4:11 PM, Siddharth Wagle 
wrote:

> Correct URL:
>
>
> https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs.py
>
> -Sid
>
>
> On Tue, Jul 29, 2014 at 1:07 PM, Siddharth Wagle 
> wrote:
>
>> You can create a config and attach it to any service and thereby a host
>> component. The config will be sent to the host on any execution command,
>> example: START/STOP.
>>
>> However, you would need to add a small snippet of code (4 lines) for the
>> configuration to be applied on the host. Take a look at the XMLConfig
>> element here,
>> https://github.com/hortonworks/ambari/blob/apache-ref/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/HDFS/package/scripts/hdfs.py
>>
>> The automated mechanism is a good idea. Maybe you can open a Jira for
>> the same. Although it is probably easier to change the stack definition
>> for a quicker resolution, if so desired.
>>
>> -Sid
>>
>> On Tue, Jul 29, 2014 at 12:42 PM, Aaron Cody 
>> wrote:
>>
>>>
>>>  is there any mechanism available (preferably a REST call) to push an
>>> arbitrary file out to a host or hosts in the cluster?
>>>
>>
>>
>



Re: Ambari-agent installation (started from UI) failed on SUSE 11 sp3

2014-07-25 Thread Jeff Sposetti
From the Ambari 1.6.1 release notes, this is a known issue.

BUG-20060: Host registration fails during Agent bootstrap on SLES due to
timeout.

http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.1.0/bk_releasenotes_ambari_1.6.1/content/ch_relnotes-ambari-1.6.1.0-knownissues.html
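
Until there is a fix, a possible manual workaround on the stuck host (a sketch —
only kill the zypper process if it is genuinely hung; /var/run/zypp.pid is the
standard zypper lock file, and <pid> is the pid from the lock message):

ps aux | grep zypper                                     # find the stuck zypper pid
kill <pid>                                               # stop the hung process
rm -f /var/run/zypp.pid                                  # clear the stale lock
zypper --non-interactive --gpg-auto-import-keys refresh  # refresh repos without key prompts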


On Fri, Jul 25, 2014 at 10:00 PM, Zongheng Yang 
wrote:

> Hmm, not sure if the following behavior causes the issue. After
> manually killing the two stuck processes, I ran "zypper -q search -s
> --match-exact ambari-agent", and it prompts for user interaction:
>
> New repository or package signing key received:
> Key ID: F5113243C66B6EAE
> Key Name: NVIDIA Corporation 
> Key Fingerprint: 9B763D49D8A5C892FC178BACF5113243C66B6EAE
> Key Created: Thu 15 Jun 2006 04:13:18 PM UTC
> Key Expires: (does not expire)
> Repository: nVidia-Driver-SLE11-SP3
>
> Do you want to reject the key, trust temporarily, or trust always?
> [r/t/a/?] (r):
>
> Is this the expected behavior? It seems the Ambari installation
> process should be able to automate all command-line steps and
> require user attention only in the UI.
>
> On Fri, Jul 25, 2014 at 6:57 PM, Zongheng Yang 
> wrote:
> > Hi user@ambari,
> >
> > I am trying to use Ambari 1.6.1 to install HDP 2.1 on a two-machine
> > cluster (EC2 r3.large instances, SUSE Linux Enterprise Server 11 sp3).
> > I have successfully installed and started the Ambari server, and in
> > the web UI's "Confirm Hosts" step, the installation failed.
> >
> > Here's the log for one of the hosts: http://pastebin.com/6JMMbaDL
> >
> > Doing a "ps aux | grep zypper" on that host gives:
> >
> > ip-172-31-18-208:~ # ps aux | grep zypper
> > root 19622  0.0  0.0  11584  1452 pts/1S+   01:41   0:00 bash
> > -c zypper -q search -s --match-exact ambari-agent | grep ambari-agent
> > | sed -re 's/\s+/ /g' | cut -d '|' -f 4 | tr '\n' ', ' | sed -s
> > 's/[-|~][A-Za-z0-9]*//g'
> > root 19623  0.5  0.0 113112 12208 pts/1S+   01:41   0:00
> > zypper -q search -s --match-exact ambari-agent
> > root 20135  0.0  0.0   5716   820 pts/0S+   01:41   0:00 grep
> zypper
> >
> > The top two processes seem to be long-lived/stuck. And `zypper install
> > ambari-agent` on that host gives:
> >
> > System management is locked by the application with pid 19623 (zypper).
> > Close this application before trying again.
> >
> > I am wondering, what's the root cause here, and how can I resolve this
> > issue? I am happy to provide more details if that's helpful for
> > diagnosis.
> >
> > Cheers,
> > Zongheng
>



Re: new error

2014-07-18 Thread Jeff Sposetti
There is work underway in trunk (to be delivered in Ambari 1.7.0) that will
migrate the "global" config-type to "-env.sh" config-type for each service.

The idea is before 1.7.0 ships, Ambari maintains "global" for backwards
compat (that's not there yet, but the -env work has begun), and then
removes support for "global" completely in a future release.
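
In the meantime, a blueprint that used to put nagios_contact under "global"
carries it under the nagios-env config-type instead. A minimal sketch of the
updated configurations section (matching the error below; the rest of the
blueprint is unchanged):

"configurations": [
    {
        "nagios-env" : {
            "nagios_contact" : "greg.h...@rackspace.com"
        }
    }
],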


On Fri, Jul 18, 2014 at 2:51 PM, Greg Hill  wrote:

>  Oh, I see the key changed from 'global' to 'nagios-env'.  Were the docs
> just wrong and it was escaping detection previously or did this
> backwards-incompatible change get purposely made in a point release?
>
>  Greg
>
>   From: Greg 
> Reply-To: "user@ambari.apache.org" 
> Date: Friday, July 18, 2014 1:43 PM
> To: "user@ambari.apache.org" 
> Subject: new error
>
>   I was revisiting my Ambari API testing with the most recent changes, and
> the code that worked a few weeks ago no longer does.  Apparently the way that
> is shown in the docs to define the nagios_contact no longer works.
>
>  I sent this in my blueprint:
>
>  "configurations": [
> {
> "global" : {
> "nagios_contact" : "greg.h...@rackspace.com"
> }
> }
> ],
>
>  But when I attempt to create that blueprint, I get a 400 error with the
> following message:
>
>  Required configurations are missing from the specified host groups:
> {gateway={nagios-env=[nagios_contact]}}
>
>  ('gateway' is the name of one of my host_groups).  This error did not
> occur as of the latest code 3-4 weeks ago.  What do I need to be doing
> differently?
>
>  Greg
>
>



Re: All processes are waiting during Cluster install

2014-07-13 Thread Jeff Sposetti
Try changing that value per this note in the docs.

http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.0.0/bk_releasenotes_ambari_1.6.0/content/ch_relnotes-ambari-1.6.0.0-behve-changes.html
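
If you'd rather script the change, recent Ambari builds bundle a configs.sh
helper under /var/lib/ambari-server/resources/scripts; a sketch, assuming your
build ships it (the Ambari host, cluster name, and credentials are placeholders):

/var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin set ambari.host.example MyCluster yarn-site yarn.timeline-service.store-class org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore

Then restart the affected services from Ambari Web so the new value takes effect.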

On Sunday, July 13, 2014, Suraj Nayak M  wrote:

> Ravi,
>
> Yes, it's set in yarn-site.xml, as below:
>
> <property>
>   <name>yarn.timeline-service.store-class</name>
>   <value>org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore</value>
> </property>
>
> What is the problem?
>
> --
> Suraj Nayak
>
> On Monday 14 July 2014 03:22 AM, Ravi Mutyala wrote:
>
>> see if yarn.timeline-service.store-class is set to
>> org.apache.hadoop.yarn.server.applicationhistoryservice.timeline.LeveldbTimelineStore
>> in yarn-site.xml (and in config on webui).
>>
>> Should have been set by ambari, if not could be a bug.
>>
>
>



Re: Ambari UI not recovering after Machine power failure

2014-07-09 Thread Jeff Sposetti
Can you confirm ambari server is running? And stop/start as necessary?

ambari-server status

Also, confirm postgres is running after you stop/start ambari-server?

ps -ef | grep postgres
service postgresql status



On Wed, Jul 9, 2014 at 8:49 AM, Suraj Nayak  wrote:

> Hi,
>
> I have a 3 node cluster with HA enabled via Ambari. Accidentally the power
> supply to all the nodes was disrupted. After the power was restored, the 3
> servers were turned on. Now when I try to log in to the Ambari Web UI, an error
> pops up: "Error in retrieving web client state from ambari server" (after
> entering username and password).
>
> It doesn't show the cluster dashboard that was showing before. Instead it
> shows the "Cluster Install Wizard" to set up a new cluster. How can I
> recover from the error? I do not want to restart the installation process.
>
> Below is the error I found in the /var/log/ambari-server/ambari-server.log log
> file.
>
> I found an email archive at
> http://mail-archives.apache.org/mod_mbox/ambari-user/201402.mbox/%3ccf1bc3ed.2e43f%25are9...@nyp.org%3E.
> But I need suggestions on recovering without starting the installation from
> scratch.
>
> Error:
>
> Local Exception Stack:
> Exception [EclipseLink-4002] (Eclipse Persistence Services -
> 2.4.0.v20120608-r11652):
> org.eclipse.persistence.exceptions.DatabaseException
> Internal Exception: org.postgresql.util.PSQLException: ERROR: missing
> chunk number 0 for toast value 50294 in pg_toast_16548
> Error Code: 0
> Call: SELECT "key", "value" FROM key_value_store WHERE ("key" = ?)
> bind => [1 parameter bound]
> Query: ReadObjectQuery(name="readObject" referenceClass=KeyValueEntity
> sql="SELECT "key", "value" FROM key_value_store WHERE ("key" = ?)")
> at
> org.eclipse.persistence.exceptions.DatabaseException.sqlException(DatabaseException.java:333)
> at
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.basicExecuteCall(DatabaseAccessor.java:646)
> at
> org.eclipse.persistence.internal.databaseaccess.DatabaseAccessor.executeCall(DatabaseAccessor.java:537)
> at
> org.eclipse.persistence.internal.sessions.AbstractSession.basicExecuteCall(AbstractSession.java:1800)
> at
> org.eclipse.persistence.sessions.server.ServerSession.executeCall(ServerSession.java:566)
> at
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:207)
> at
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.executeCall(DatasourceCallQueryMechanism.java:193)
> at
> org.eclipse.persistence.internal.queries.DatasourceCallQueryMechanism.selectOneRow(DatasourceCallQueryMechanism.java:668)
> at
> org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRowFromTable(ExpressionQueryMechanism.java:2744)
> at
> org.eclipse.persistence.internal.queries.ExpressionQueryMechanism.selectOneRow(ExpressionQueryMechanism.java:2697)
> at
> org.eclipse.persistence.queries.ReadObjectQuery.executeObjectLevelReadQuery(ReadObjectQuery.java:452)
> at
> org.eclipse.persistence.queries.ObjectLevelReadQuery.executeDatabaseQuery(ObjectLevelReadQuery.java:1149)
> at
> org.eclipse.persistence.queries.DatabaseQuery.execute(DatabaseQuery.java:852)
> at
> org.eclipse.persistence.queries.ObjectLevelReadQuery.execute(ObjectLevelReadQuery.java:1108)
> at
> org.eclipse.persistence.queries.ReadObjectQuery.execute(ReadObjectQuery.java:420)
> at
> org.eclipse.persistence.queries.ObjectLevelReadQuery.executeInUnitOfWork(ObjectLevelReadQuery.java:1196)
> at
> org.eclipse.persistence.internal.sessions.UnitOfWorkImpl.internalExecuteQuery(UnitOfWorkImpl.java:2875)
> at
> org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1602)
> at
> org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1584)
> at
> org.eclipse.persistence.internal.sessions.AbstractSession.executeQuery(AbstractSession.java:1535)
> at
> org.eclipse.persistence.internal.jpa.EntityManagerImpl.executeQuery(EntityManagerImpl.java:838)
> at
> org.eclipse.persistence.internal.jpa.EntityManagerImpl.findInternal(EntityManagerImpl.java:778)
> at
> org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:671)
> at
> org.eclipse.persistence.internal.jpa.EntityManagerImpl.find(EntityManagerImpl.java:543)
> at
> org.apache.ambari.server.orm.dao.KeyValueDAO.findByKey(KeyValueDAO.java:42)
> at
> org.apache.ambari.server.orm.AmbariLocalSessionInterceptor.invoke(AmbariLocalSessionInterceptor.java:53)
> at
> org.apache.ambari.server.api.services.PersistKeyValueImpl.getValue(PersistKeyValueImpl.java:50)
> at
> org.apache.ambari.server.api.services.PersistKeyValueService.getKey(PersistKeyValueService.java:83)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorI

Re: Plugins Support to extend Apache Ambari Functionality

2014-07-09 Thread Jeff Sposetti
Hi,

Yes, on Question #2, the options are PUPPET or PYTHON. Although all the
Stacks that currently exist are using PYTHON.

For Question #1, extending a stack does not require a re-build/re-deploy of
Ambari to get your service available via the Ambari API. The example shows
that after you put your custom service in place on the Ambari Server,
perform an ambari-server restart, which causes the new service (well, all
the stacks) to be packaged. The agents then recognize the new stack package
and pull it down so they have the latest.
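
Concretely, the flow is something like this (a sketch — MYSERVICE is a
placeholder, and the stack path varies with your stack version):

# place the custom service definition in the server's stack tree
cp -r MYSERVICE /var/lib/ambari-server/resources/stacks/HDP/2.1/services/
# restart so the server re-packages the stacks; agents pick up the new package on heartbeat
ambari-server restart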


On Wed, Jul 9, 2014 at 4:15 AM, Suraj Nayak M  wrote:

>  For my 2nd question below regarding the scriptType, the documentation
> states the only possible types are PYTHON and PUPPET.
>
> On Tuesday 08 July 2014 10:44 PM, Suraj Nayak M wrote:
>
> Jeff,
>
> Thanks for the link and providing the information regarding Stack.
>
>    - Is extending a Stack and adding a Custom Service isolated from
>    Ambari Core? Or does it need a rebuild and redeploy of Ambari?
>    - I saw the Confluence page, which describes the structure of the
>    stack in detail along with an example of Implementing a Custom Client
>    Service. I am curious about the code below:
>
> <scriptType>PYTHON</scriptType>
>
> Which other languages are currently supported apart from Python?
>
> --
> Thanks
> Suraj Nayak
>
> On Tuesday 08 July 2014 05:53 PM, Jeff Sposetti wrote:
>
>  For that, you'll want to look at extending a Stack and adding a Custom
> Service. This allows you to define how to control install, configure,
> start, stop of components of a Custom Service in the cluster. Once that
> Service is in place, then you can use a View to execute the Ambari REST API
> to perform the operations.
>
>  Checkout this wiki page for more info.
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
>
>
> On Tue, Jul 8, 2014 at 4:10 AM, Suraj Nayak M  wrote:
>
>>  Yusaku,
>>
>> Thanks for your answers. It saved lot of my time. Appreciate your help!
>>
>> Regarding the question *"Will I be able to install any custom agents via
>> Plugins/View ?"* :
>>
>> Let's consider the use case of Kafka Broker monitoring in Server2 and
>> Server3 while the Ambari Server is running in Server1. This requires agents to
>> be installed (let me know if this is not the right way; any suggestions
>> welcome) in Server2 and Server3 for sending heartbeat information such
>> as health, statistics, etc. Thus custom agents need to run on
>> Server2 and Server3. Can I deploy these agents to Server2 and Server3 via a
>> Custom View (from the Web UI, by selecting Server2 and Server3 for agent
>> installation)?
>>
>> I am trying to achieve the following:
>>
>>    - Admins should be able to easily deploy the custom agent on the servers in
>>    the cluster and monitor the health of the custom process which is running
>>    on those servers.
>>    - Restart the failed process.
>>    - Check the log files created by the process on the remote server
>>    from the web UI.
>>
>> --
>> Suraj Nayak
>>
>> On Tuesday 08 July 2014 02:41 AM, Yusaku Sako wrote:
>>
>> Suraj,
>>
>> Please see my answers inline:
>>
>>
>> On Mon, Jul 7, 2014 at 12:41 PM, Suraj Nayak M  
>>  wrote:
>>
>>  Yusaku,
>>
>> Thanks for sharing the link. This clarifies many of my doubts, especially 
>> regarding deployment of Views.
>>
>> Just a couple of more questions :
>>
>> 1. Ambari uses EmberJS (correct me if wrong), will it support AngularJS 
>> plugin development ? As AngularJS is also Client side MVC framework like 
>> EmberJS
>>
>>  You can use any client-side JS framework you'd like to develop new
>> Views, including AngularJS.  Views are isolated from Ambari Web core
>> (which is written in Ember.js)
>>
>>
>>  2. Can I use HTTPS support ? I found Ambari Web supports HTTPS here.
>>
>>  Yes, Ambari Server (and Views) can be configured to use HTTPS.
>>
>>
>>  3. Are there any limitations for plugin/view development that I should
>> take care of? Like the below:
>>
>> Any stability issues (if any), as I see here in the documentation: "This
>> capability is currently under development".
>>
>>  Yes, expect some level of changes as far as details go on the Views
>> framework and API.
>>
>>
>>  Will I be able to install any custom agents via Plugins/View ?
>>
>>  Can you clarify what you mean by this (and use cases)?
>>
>>
>>  It is very early for me to ask Question 3 as I have not done much
>> investigation in that area, but I am curious to know.
>

Re: Plugins Support to extend Apache Ambari Functionality

2014-07-08 Thread Jeff Sposetti
For that, you'll want to look at extending a Stack and adding a Custom
Service. This allows you to define how to install, configure, start, and
stop the components of a Custom Service in the cluster. Once that Service
is in place, you can use a View to execute the Ambari REST API to perform
the operations.

Check out this wiki page for more info.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=38571133
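
Once the Custom Service is registered, the standard service endpoints apply
to it. A minimal sketch (cluster and service names are illustrative):

  # stop a custom service by setting its desired state to INSTALLED
  curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
    -d '{"RequestInfo":{"context":"Stop MYSERVICE"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
    http://ambari-host:8080/api/v1/clusters/MyCluster/services/MYSERVICE
  # start it again by PUTting "state":"STARTED" to the same URL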


On Tue, Jul 8, 2014 at 4:10 AM, Suraj Nayak M  wrote:

>  Yusaku,
>
> Thanks for your answers. It saved a lot of my time. Appreciate your help!
>
> Regarding the question *"Will I be able to install any custom agents via
> Plugins/View ?"* :
>
> Let's consider the use case of Kafka Broker monitoring on Server2 and
> Server3 while the Ambari Server is running on Server1. This requires agents
> to be installed *(let me know if this is not the right way; any suggestions
> welcome)* on Server2 and Server3 for sending heartbeat information such as
> health, statistics, etc. Thus custom agents need to run on Server2 and
> Server3. Can I deploy these agents to Server2 and Server3 via a Custom View
> (from the Web UI, by selecting Server2 and Server3 for agent installation)?
>
> I am trying to achieve the following:
>
>- Admins should be able to easily deploy the custom agent on the servers in
>the cluster and monitor the health of the custom process running on
>those servers.
> - Restart the failed process.
>- Check the log files created by the process on the remote server from
>the web UI.
>
> --
> Suraj Nayak
>
> On Tuesday 08 July 2014 02:41 AM, Yusaku Sako wrote:
>
> Suraj,
>
> Please see my answers inline:
>
>
> On Mon, Jul 7, 2014 at 12:41 PM, Suraj Nayak M  
>  wrote:
>
>  Yusaku,
>
> Thanks for sharing the link. This clarifies many of my doubts, especially
> regarding deployment of Views.
>
> Just a couple more questions:
>
> 1. Ambari uses EmberJS (correct me if I'm wrong); will it support AngularJS
> plugin development? AngularJS is also a client-side MVC framework like
> EmberJS.
>
>  You can use any client-side JS framework you'd like to develop new
> Views, including AngularJS.  Views are isolated from Ambari Web core
> (which is written in Ember.js)
>
>
>  2. Can I use HTTPS? I found that Ambari Web supports HTTPS here.
>
>  Yes, Ambari Server (and Views) can be configured to use HTTPS.
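>
> (Roughly, on the server host: run "ambari-server setup-security" and choose
> the HTTPS option, supplying your certificate; the exact command and prompts
> vary by Ambari release, so treat this as a sketch.)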
>
>
>  3. Are there any limitations for plugin/view development that I should
> take care of? Like the below:
>
> Any stability issues (if any), as I see here in the documentation: "This
> capability is currently under development".
>
>  Yes, expect some level of changes as far as details go on the Views
> framework and API.
>
>
>  Will I be able to install any custom agents via Plugins/View ?
>
>  Can you clarify what you mean by this (and use cases)?
>
>
>  It is very early for me to ask Question 3 as I have not done much
> investigation in that area, but I am curious to know.
>
>  Yusaku
>
>
>  On Tuesday 08 July 2014 12:09 AM, Yusaku Sako wrote:
>
> Hi Suraj,
>
> You might also want to take a look at 
> http://www.slideshare.net/hortonworks/ambari-views-overview for an overview 
> of Ambari Views.
>
> Yusaku
>
>
> On Mon, Jul 7, 2014 at 6:27 AM, Suraj Nayak M  
>  wrote:
>
>  Thanks for the link Dmitry.
>
> Yes, I was looking for similar capability. I will go through the 
> documentation and also examples.
>
> On Monday 07 July 2014 05:28 PM, Dmitry Sen wrote:
>
> Hi,
>
> Apache Ambari supports custom plug-in UI capabilities. I think that's what
> you're looking for: https://cwiki.apache.org/confluence/display/AMBARI/Views
>
>
>
>
> On Mon, Jul 7, 2014 at 12:08 PM, ÐΞ€ρ@Ҝ (๏̯͡๏)  
>  wrote:
>
>  +1
> Kafka cluster or druid cluster or custom java processes
>
>
> On Mon, Jul 7, 2014 at 12:46 PM, Suraj Nayak M  
>  wrote:
>
>  Hi Everyone,
>
> Does Apache Ambari support custom plugin development, by which one can extend
> Ambari functionality? Are there any custom plugins already out there?
>
> Custom functionality might be the following:
>
> Install, monitor, start, stop, and restart custom Java processes (for example,
> Kafka or in-house Java frameworks or tools which run in distributed mode)
> Creating a custom UI to monitor the custom Java processes added above.
>
> --
> Thanks & Regards
> Suraj Nayak M
>
>
>
>  --
> Deepak
>
>
>  --
> BR,
> Dmitry Sen
>

Re: Adding services (Hive/HBase) after initial installation

2014-07-04 Thread Jeff Sposetti
Hi, what version of Ambari are you using with HDP 2.1?

ambari-server --version

or in Ambari Web, under the user dropdown > About dialog.

Thanks.
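
As a fallback, a service resource can also be created directly through the
REST API. A rough sketch (the cluster name is taken from your error message;
the full flow also needs components, host assignments, and configs before
install/start):

  curl -u admin:admin -H "X-Requested-By: ambari" -X POST \
    http://ambari-host:8080/api/v1/clusters/GoBIg/services/HIVE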



On Fri, Jul 4, 2014 at 10:06 AM, Michael Moss 
wrote:

> Hi. I'm curious if anyone is able to reproduce this.
>
> I'm on a Centos6.5/Openstack VM with the latest HDP2.1. The machine was a
> little light on resources so I installed it without Hive or HBase. I later
> wanted to do a POC with the stinger stuff so attempted to "add service"
> through Ambari.
>
> When attempting to add Hive to this existing installation, it requires me
> to include HDFS, YARN/MR2, Tez, Zookeeper (despite these all already being
> installed). When arriving at the “Install, Start and Test” portion of the
> Add Service Wizard, I get the error:
>
> “Attempted to create services which already exist; ,clusterName=GoBIg
> serviceNames=TEZ, ZOOKEEPER,MAPREDUCE2,HDFS, YARN”
>
> As far as I can tell, there is no way to install Hive through Ambari after
> the initial installation has already been done.
>
> Thanks,
>
> Mike
>
>



Re: How to add configurations to specific host to create cluster using blueprint ?

2014-06-05 Thread Jeff Sposetti
You can set configurations at the blueprint level (to apply to all
host_groups), and you can also set configurations in a host_groups element to
apply them to just that host_group.

Will that help? Or will it just mean you will end up needing one host_group
per host?

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-Configurations
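
A minimal sketch of a host_group-level override (names and mount paths are
illustrative):

  {
    "Blueprints": { "blueprint_name": "multi-disk", "stack_name": "HDP", "stack_version": "2.1" },
    "host_groups": [
      {
        "name": "datanodes_group1",
        "cardinality": "1",
        "components": [ { "name": "DATANODE" } ],
        "configurations": [
          { "hdfs-site": { "dfs.data.dir": "/mnt/disk1/hadoop/hdfs/data,/mnt/disk2/hadoop/hdfs/data" } }
        ]
      }
    ]
  }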


On Thu, Jun 5, 2014 at 12:58 AM, Qing Chi 79624  wrote:

>  Hi guys,
> I want to use a specific data dir on each host. For example:
>
>  Host 1, following is the content of hdfs-site.xml on one host:
>
>  <configuration>
>   …
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/mnt/scsi-36000c297241e2328f0c26ccf28525ada-part1/hadoop/hdfs/data,/mnt/scsi-36000c296928f65c7749de0620b422dde-part1/hadoop/hdfs/data,/mnt/scsi-36000c2970047fad07f4662de91001b8e-part1/hadoop/hdfs/data</value>
>     <final>true</final>
>   </property>
>   …
>  </configuration>
>
>  Host 2, following is the content of hdfs-site.xml on another host:
>
>  <configuration>
>   …
>   <property>
>     <name>dfs.data.dir</name>
>     <value>/mnt/scsi-36000c2966300dabe41c0193161330a2c-part1/hadoop/hdfs/data,/mnt/scsi-36000c29ce17ee6c47e1a329791cbbe61-part1/hadoop/hdfs/data,/mnt/scsi-36000c29694c370eb163114067ffed1bb-part1/hadoop/hdfs/data</value>
>     <final>true</final>
>   </property>
>   …
>  </configuration>
>
>  The data dir values are different on every host. In this situation, how
> do I add configurations for a specific host when creating a cluster using a
> blueprint? Can somebody give me some help?
>
>  Thanks,
>
>  -qing
>
>



Re: Adding host to cluster

2014-06-04 Thread Jeff Sposetti
Did you make any modifications to your SSH setup? Can you confirm your SSH
setup, and also confirm (from the command line) that the new machines can be
reached via SSH from the Ambari Server?

http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.0.0/bk_using_Ambari_book/content/ambari-chap1-5-2.html
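
A quick check from the Ambari Server host (key path and user are whatever you
supplied in the Add Hosts wizard):

  ssh -i /path/to/private_key root@hcchcadp2.sergas.local "echo ok"

If that prompts for a password or is denied, the bootstrap scp will fail the
same way.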


On Wed, Jun 4, 2014 at 1:51 AM, Alberto Ayora 
wrote:

> Hello,
>
> after installing a single-node cluster with Ambari 1.6.0, everything was
> OK and running. But after that, I tried to add a new host to the
> cluster and found this error:
>
> ==
> Copying common functions script...
> ==
>
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
> lost connection
> scp /usr/lib/python2.6/site-packages/common_functions
> host=hcchcadp2.sergas.local, exitcode=1
>
> ERROR: Bootstrap of host hcchcadp2.sergas.local fails because previous action 
> finished with non-zero exit code (1)
> ERROR MESSAGE: Permission denied 
> (publickey,gssapi-keyex,gssapi-with-mic,password).
> lost connection
>
> STDOUT:
> Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
> lost connection
>
>
> The thing is that I can reach this host via SSH without a password from
> host-master.
>
> Does anybody have any idea if I am doing something wrong?
>
> Thank you.
>
> --
> Alberto Ayora Pais
>
>



Re: Questions about ambari REST API

2014-06-02 Thread Jeff Sposetti
Hi,

How did you create your cluster? Did you use the Ambari Blueprints API (and
are using the Ambari 1.6.0 release)?

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints

J

On Mon, Jun 2, 2014 at 8:46 PM, Qing Chi 79624  wrote:

>  Hello guys,
>
>  I'm working on provisioning a cluster through Ambari using the REST API,
> and I have some questions about the Ambari REST API.
>  Questions:
>
>1. Can Ambari manage only one cluster?
>2. How do I install the packages of a service using the Ambari REST API?
>3. Why would a cluster created via the REST API disappear after
>creating another cluster using the Ambari Web UI? Steps:
>
>  a)  Create a cluster named "apache" using the REST API and add hosts,
> services, components and configurations to the cluster.
>  b)  Create another cluster named "test" using the Ambari Web UI, and
>  persist it.
>  c)  Open this URL in a browser: http://10.111.88.146:8080/api/v1/clusters.
> Following is the result:
>  {
>  "href" : "http://10.111.88.146:8080/api/v1/clusters";,
>  "items" : [
>  {
>  "href" : "http://10.111.88.146:8080/api/v1/clusters/test";,
>  "Clusters" : {
>  "cluster_name" : "test",
>  "version" : "HDP-2.1"
>  }
>  }
>  ]
>  }
>  I am confused about where the cluster named "apache" went. But when I rerun
> the command to create a cluster named "apache", it gives me the
> message ("Attempted to create a Cluster which already exists,
> clusterName=apache").
>  Command:
>  curl -H "X-Requested-By: ambari" -X POST -d "@cluster" --user
> "admin:admin" -i http://10.141.73.168:8080/api/v1/clusters/apache
>  {
>  "Clusters":
>   {
>  "cluster_name": "apache",
>   "version":"HDP-2.0"
>   }
>  }
>  Response:
>  HTTP/1.1 409 Conflict
>  Set-Cookie: AMBARISESSIONID=1kcj0ocdogqm9ggvn4m2ekbng;Path=/
>  Expires: Thu, 01 Jan 1970 00:00:00 GMT
>  Content-Type: text/plain
>  Content-Length: 108
>  Server: Jetty(7.6.7.v20120910)
>  {
>  "status" : 409,
>  "message" : "Attempted to create a Cluster which already exists,
> clusterName=apache"
>  }
>
>  Can you give me some help with these questions?
>
>  Thanks,
>
>  -qing
>



Re: Mirror server is not accessible from Ambari Web UI

2014-05-30 Thread Jeff Sposetti
Is it failing when Ambari Web attempts to validate the Base URLs? Are you
using a proxy?

http://docs.hortonworks.com/HDPDocuments/Ambari-1.6.0.0/bk_ambari_reference/content/Configure-Ambari-Server-for-Internet-Proxy.html
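
If a proxy is in play, a rough sketch of the setup described in that doc
(host and port are placeholders):

  # in /var/lib/ambari-server/ambari-env.sh, extend AMBARI_JVM_ARGS with:
  #   -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128
  ambari-server restart
  # the server should then validate the Base URLs through the proxy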


On Fri, May 30, 2014 at 5:10 AM, Meghavi Sugandhi 
wrote:

>
> Hi,
>
> While installing Hadoop from the Ambari Web UI, we are facing an issue at
> the Select Stack stage.
> We have created a mirror server and it is accessible from a web browser, but
> the Ambari Web UI is not able to access it.
>
> Please provide a solution for the same.
>
> Thanks & Regards,
> Meghavi Sugandhi
>
>
>



Re: HDP 2.1.1 API install with Ambari 1.5 fails to start YARN

2014-03-28 Thread Jeff Sposetti
Ramya, if you are up for it, do you want to try installing with the Blueprints
API? It simplifies the install vs. using the many individual install API calls...

https://cwiki.apache.org/confluence/display/AMBARI/Blueprints
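
The flow is essentially two calls (names and payload files are illustrative):

  # 1. register the blueprint
  curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @blueprint.json \
    http://ambari-host:8080/api/v1/blueprints/my-blueprint
  # 2. create the cluster from it with a host-mapping template
  curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @hostmapping.json \
    http://ambari-host:8080/api/v1/clusters/MyCluster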


On Fri, Mar 28, 2014 at 4:35 PM, Ramya Manjunath wrote:

> The script modifies many configuration parameters and some are specific to
> our environment.
>
> I basically started with
> https://github.com/apache/ambari/blob/trunk/ambari-server/src/test/resources/deploy_HDP2.sh
> and added the new services and modified the configs.
>
> Thanks,
> Ramya
>
>
>
> On Fri, Mar 28, 2014 at 1:22 PM, Richard Clapp  wrote:
>
>> Can you paste in your script if it's not too long? Perhaps there is
>> something we can see.
>>
>>   --
>>  *From:* Ramya Manjunath 
>> *To:* user@ambari.apache.org
>> *Sent:* Friday, March 28, 2014 4:18 PM
>> *Subject:* HDP 2.1.1 API install with Ambari 1.5 fails to start YARN
>>
>> I am installing HDP 2.1.1 with Ambari 1.5.0 with only calls to the API.
>> In my script, I set the configs, add the services, hosts and components
>> with the API. I also install HDP and persist the cluster in the same
>> script. I start the services using the GUI.
>>
>> I am able to install all the services but YARN fails to start with this
>> error:
>> org.apache.hadoop.security.AccessControlException: Permission denied:
>> user=yarn, access=WRITE, inode="/":hdfs:hdfs:drwxr-xr-x
>>
>> I am able to start it after I modify the permissions as the hdfs user using
>> the command hadoop fs -chmod 777 /.
>>
>> However, I do not see this issue if we install and start the services
>> using the GUI. Is there any API call that I am missing that would handle
>> this?
>>
>> Thanks,
>> Ramya
>>
>>
>>
>



Re: Specifying the network name of an agent

2014-03-14 Thread Jeff Sposetti
Hi, you have an option to have the agent register with a custom hostname
(in cases like yours where "hostname -f" does not return the hostname you want
to use). These instructions talk about using a custom script with your
agents.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.9.1/bk_using_Ambari_book/content/ch_appendix_custom_hostnames.xml.html
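
A rough sketch of that approach (paths are the usual defaults and may differ
by version):

  #!/bin/sh
  # /var/lib/ambari-agent/hostname.sh -- print the name the agent should use
  echo node1.internal.example.com

Then point the agent at it in /etc/ambari-agent/conf/ambari-agent.ini under
the [agent] section (hostname_script=/var/lib/ambari-agent/hostname.sh) and
restart with: ambari-agent restart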


On Fri, Mar 14, 2014 at 11:13 AM, Benbenek, Waldyn J <
waldyn.benbe...@unisys.com> wrote:

> Hello,
>
>
>
> I am manually connecting a node to the Ambari cluster.  It registers with
> the server OK, but it does so using the hostname.  Each of the nodes in the
> cluster has three network names.  I need them to talk over specific IP
> addresses, which are NOT the IP addresses associated with the hostname.  Is
> there a setting in ambari-agent.ini or some other file which will tell the
> agent how to identify itself?
>
>
>
> Thanks,
>
>
>
> Wally Benbenek
>



Re: Fw: Unable to load logon screen of Ambari Web ui

2014-01-14 Thread Jeff Sposetti
Which version of Ambari are you using? This looks like:

https://issues.apache.org/jira/browse/AMBARI-3892

And was resolved in the 1.4.2 release.


On Tue, Jan 14, 2014 at 7:53 AM, Meghavi Sugandhi
wrote:

>
> Hi,
>
> I have installed ambari-agent and ambari-server with RPM packages. They
> are working properly.
> But I am unable to view the Ambari Web UI, which runs on port 8080.
>
> Please find a screenshot of the Web UI attached herewith, and provide a
> solution for the same.
>
> Thanks & Regards,
> Meghavi Sugandhi
> Tata Consultancy Services
> Mailto: meghavi.sugan...@tcs.com
> Website: http://www.tcs.com
>
>
>
>
>



Re: Journal Nodes in a multi-site environment

2013-12-11 Thread Jeff Sposetti
Adding in some Hadoop folks to chime in here.


On Wed, Dec 11, 2013 at 5:35 AM, Chadwick Banning  wrote:

> Hi all,
>
> I have an Ambari 1.4/HDP 2.0.6 environment that is split between two data
> centers -- a main site and a recovery site.  We have NameNode HA enabled
> with automatic failover and the problem we are facing is how to divide the
> journal nodes across both sites so that failover happens appropriately.
>
> It seems that one site will always have the majority of journal nodes,
> and if that site were to go down, the other site's NN would no longer be
> able to start, as it couldn't reach a majority of the journal nodes.
>
> Is there any way around this?  I know an odd number of journal nodes is
> recommended but what would happen if we were to place an even number of
> journal nodes at each site?
>
> Thanks for any input!
>
