Re: Changing the Alert Definitions

2016-04-13 Thread Henning Kropp
To follow up: I believe the default_port is causing the issue. Why 
is it a float?


The error appeared even with DELETE. So my assumption is that, since I had 
disabled the HDFS alerts due to parsing errors, they were parsed again. 
Could that be?


I submitted alert definitions with default_port: 0.0 without issues, but 
they never registered with any host. Once I removed the default_port 
from the source description, it was fine.
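
A minimal sketch of a request that registered fine for me (cluster name c1 and definition id 30 are placeholders, and the reporting texts are shortened); the point is that the source block carries no "default_port" key:

curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d '{
  "AlertDefinition": {
    "source": {
      "type": "PORT",
      "uri": "{{hdfs-site/dfs.namenode.rpc-address}}",
      "reporting": {
        "ok": { "text": "TCP OK" },
        "warning": { "text": "TCP slow", "value": 1.5 },
        "critical": { "text": "Connection failed", "value": 5.0 }
      }
    }
  }
}' http://:8080/api/v1/clusters/c1/alert_definitions/30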


All with Ambari 2.2

Regards

On 06/04/16 at 18:52, Jonathan Hurley wrote:


This is an artifact of how Gson does its conversion. There were 
several bugs fixed in Ambari around this. I'm guessing you have an 
older version and may be hitting AMBARI-11566? In any event, can you 
provide the POST which created the definition?



On Apr 6, 2016, at 12:10 PM, Henning Kropp <hkr...@microlution.de> wrote:


Ok, since I was getting back a 200 reply I didn't check the logs. I 
get the below error for no apparent reason. During trial and error I 
removed almost all fields in a PUT request, while still getting the 
below error. What I also noticed is that "default_port" is returned 
as a float but defined as an int in AlertUri.class.


com.google.gson.JsonSyntaxException: java.lang.NumberFormatException: For input string: ".0"
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:232)
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:222)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:93)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:93)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)
    at com.google.gson.Gson.fromJson(Gson.java:795)
    at com.google.gson.Gson.fromJson(Gson.java:859)
    at com.google.gson.Gson$2.deserialize(Gson.java:131)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory$AlertDefinitionSourceAdapter.deserialize(AlertDefinitionFactory.java:350)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory$AlertDefinitionSourceAdapter.deserialize(AlertDefinitionFactory.java:294)
    at com.google.gson.TreeTypeAdapter.read(TreeTypeAdapter.java:58)
    at com.google.gson.Gson.fromJson(Gson.java:795)
    at com.google.gson.Gson.fromJson(Gson.java:761)
    at com.google.gson.Gson.fromJson(Gson.java:710)
    at com.google.gson.Gson.fromJson(Gson.java:682)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory.coerce(AlertDefinitionFactory.java:195)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.getAlertDefinitions(AlertDefinitionHash.java:240)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.enqueueAgentCommands(AlertDefinitionHash.java:490)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.enqueueAgentCommands(AlertDefinitionHash.java:460)
    at org.apache.ambari.server.events.listeners.alerts.AlertHashInvalidationListener.onAmbariEvent(AlertHashInvalidationListener.java:94)
    at sun.reflect.GeneratedMethodAccessor225.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.google.common.eventbus.EventHandler.handleEvent(EventHandler.java:74)
    at com.google.common.eventbus.EventBus.dispatch(EventBus.java:314)
    at com.google.common.eventbus.AsyncEventBus.access$001(AsyncEventBus.java:34)
    at com.google.common.eventbus.AsyncEventBus$1.run(AsyncEventBus.java:100)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: ".0"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:569)
    at java.math.BigInteger.<init>(BigInteger.java:461)
    at java.math.BigInteger.<init>(BigInteger.java:597)
    at com.google.gson.internal.LazilyParsedNumber.intValue(LazilyParsedNumber.java:41)
    at com.google.gson.JsonPrimitive.getAsInt(JsonPrimitive.java:255)
    at com.google.gson.internal.bind.JsonTreeReader.nextInt(JsonTreeReader.java:197)
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:230)

... 30 more

On 06/04/16 at 16:33, Jonathan Hurley wrote:
Alerts are automatically distributed to all hosts which match their 
service and component. So, if you created yo

Re: Registered version unavailable

2016-04-13 Thread Henning Kropp
Had the same experience. The upgrade path from 2.3.4 directly to 2.4 is also 
not available, so you have to go via 2.3.4.7. At least that is how it 
appeared to me.


The trick with altering the DB worked for me; in the end I was just not 
able to trigger the upgrade.


Regards

On 12/04/16 at 16:49, h...@uni.de wrote:

More info: I just installed the "new" version 2.3.4.7, which worked
flawlessly. So it looks like the bug is specific to 2.4.0.0.


2016-04-12 13:11 GMT+02:00  :

Hey,

can anyone recommend how to proceed with this issue?

-Seb

2016-04-04 20:19 GMT+02:00  :

Most definitely a bug:
https://community.hortonworks.com/questions/17201/registered-version-hdp-2340-is-not-listed.html#answer-25787

On 01.04.2016, 4:54 p.m., wrote:

Hi, the output looks like this (checking for 2.2):

{
  "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions?fields=*,repo_id,repo_name,stack_version",
  "items" : [
    {
      "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/2",
      "CompatibleRepositoryVersions" : {
        "display_name" : "HDP-2.2.4.2",
        "id" : 2,
        "repository_version" : "2.2.4.2-2",
        "stack_name" : "HDP",
        "stack_version" : "2.2",
        "upgrade_types" : [ "ROLLING", "NON_ROLLING" ]
      },
      "operating_systems" : [
        {
          "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/2/operating_systems/redhat6",
          "OperatingSystems" : {
            "os_type" : "redhat6",
            "repository_version_id" : 2,
            "stack_name" : "HDP",
            "stack_version" : "2.2"
          }
        }
      ]
    },
    {
      "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/3",
      "CompatibleRepositoryVersions" : {
        "display_name" : "HDP-2.2.6.0",
        "id" : 3,
        "repository_version" : "2.2.6.0",
        "stack_name" : "HDP",
        "stack_version" : "2.2",
        "upgrade_types" : [ "ROLLING", "NON_ROLLING" ]
      },
      "operating_systems" : [
        {
          "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/3/operating_systems/redhat6",
          "OperatingSystems" : {
            "os_type" : "redhat6",
            "repository_version_id" : 3,
            "stack_name" : "HDP",
            "stack_version" : "2.2"
          }
        }
      ]
    },
    {
      "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/4",
      "CompatibleRepositoryVersions" : {
        "display_name" : "HDP-2.2.6.0-2800",
        "id" : 4,
        "repository_version" : "2.2.6.0-2800",
        "stack_name" : "HDP",
        "stack_version" : "2.2",
        "upgrade_types" : [ "ROLLING", "NON_ROLLING" ]
      },
      "operating_systems" : [
        {
          "href" : "http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions/4/operating_systems/redhat6",
          "OperatingSystems" : {
            "os_type" : "redhat6",
            "repository_version_id" : 4,
            "stack_name" : "HDP",
            "stack_version" : "2.2"
          }
        }
      ]
    }
  ]
}

I changed the curl request to check for versions/2.4 - the output looks like
this:

{
  "href" : "http://:8080/api/v1/stacks/HDP/versions/2.4/compatible_repository_versions?fields=*,repo_id,repo_name,stack_version",
  "items" : [
    {
      "href" : "http://:8080/api/v1/stacks/HDP/versions/2.4/compatible_repository_versions/205",
      "CompatibleRepositoryVersions" : {
        "display_name" : "HDP-2.4.0.0",
        "id" : 205,
        "repository_version" : "2.4.0.0",
        "stack_name" : "HDP",
        "stack_version" : "2.4",
        "upgrade_types" : [ "ROLLING", "NON_ROLLING" ]
      },
      "operating_systems" : [
        {
          "href" : "http://:8080/api/v1/stacks/HDP/versions/2.4/compatible_repository_versions/205/operating_systems/redhat6",
          "OperatingSystems" : {
            "os_type" : "redhat6",
            "repository_version_id" : 205,
            "stack_name" : "HDP",
            "stack_version" : "2.4"
          }
        }
      ]
    }
  ]
}

I also tried (re-)adding it multiple times and with different browsers -
nothing works.


2016-04-01 14:55 GMT+02:00 cs user :

Hi again,

Could you send us the output of this command? (from your link)

curl -i -k -u : -X GET http://:8080/api/v1/stacks/HDP/versions/2.2/compatible_repository_versions?fields=*,repo_id,repo_name,stack_version

to see what the server displays directly from the API.

Thanks!


On Fri, Apr 1, 2016 at 1:52 PM, cs user  wrote:

Hi There,

It looks like either it hasn't registered correctly, so you can try to
do
it again, or that the page is cach

Re: How to remove a client from a host

2016-04-07 Thread Henning Kropp

Hi,

HIVE is a service, which consists of components. In your case you want 
to remove the host component HIVE_CLIENT from a host. Here is how you 
can do it:


# Stop the component by putting it in state "INSTALLED" (yes, you also need to stop a client)
curl -u admin:admin -H "X-Requested-by:ambari" -i -k -X PUT -d '{"ServiceComponentInfo": {"state": "INSTALLED"}}' http://:8080/api/v1/clusters//hosts//host_components/HIVE_CLIENT


# Remove the component
curl -u admin:admin -H "X-Requested-by:ambari" -i -k -X DELETE http://:8080/api/v1/clusters//hosts//host_components/HIVE_CLIENT
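
# Since you want this gone from five hosts, the same two calls can be
# looped (sketch only; the cluster name and host list are placeholders):
CLUSTER=c1
for HOST in node2 node3 node4 node5 node6; do
  curl -u admin:admin -H "X-Requested-by:ambari" -i -k -X PUT -d '{"ServiceComponentInfo": {"state": "INSTALLED"}}' http://:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/HIVE_CLIENT
  curl -u admin:admin -H "X-Requested-by:ambari" -i -k -X DELETE http://:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/HIVE_CLIENT
done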


Let me know if this worked for you.

Regards,
Henning

On 07/04/16 at 20:58, Banias H wrote:
I have a 6-node cluster running Ambari 2.2.1.1. While installing Hive 
through Ambari, I installed Hive clients on all 6 nodes, but now I would 
like to have the Hive client on only 1 node.


How do I remove Hive client on the 5 nodes?

I looked around and only found the REST calls to remove Hive 
altogether. I also can't find any option in the Ambari UI. I am 
hoping I don't have to remove Hive and install it again, as I already 
have tables and databases set up.


Thanks, B





Re: Changing the Alert Definitions

2016-04-06 Thread Henning Kropp
Ok, since I was getting back a 200 reply I didn't check the logs. I get 
the below error for no apparent reason. During trial and error I removed 
almost all fields in a PUT request, while still getting the below error. 
What I also noticed is that "default_port" is returned as a float but 
defined as an int in AlertUri.class.


com.google.gson.JsonSyntaxException: java.lang.NumberFormatException: For input string: ".0"
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:232)
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:222)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:93)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.read(ReflectiveTypeAdapterFactory.java:93)
    at com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$Adapter.read(ReflectiveTypeAdapterFactory.java:172)
    at com.google.gson.Gson.fromJson(Gson.java:795)
    at com.google.gson.Gson.fromJson(Gson.java:859)
    at com.google.gson.Gson$2.deserialize(Gson.java:131)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory$AlertDefinitionSourceAdapter.deserialize(AlertDefinitionFactory.java:350)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory$AlertDefinitionSourceAdapter.deserialize(AlertDefinitionFactory.java:294)
    at com.google.gson.TreeTypeAdapter.read(TreeTypeAdapter.java:58)
    at com.google.gson.Gson.fromJson(Gson.java:795)
    at com.google.gson.Gson.fromJson(Gson.java:761)
    at com.google.gson.Gson.fromJson(Gson.java:710)
    at com.google.gson.Gson.fromJson(Gson.java:682)
    at org.apache.ambari.server.state.alert.AlertDefinitionFactory.coerce(AlertDefinitionFactory.java:195)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.getAlertDefinitions(AlertDefinitionHash.java:240)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.enqueueAgentCommands(AlertDefinitionHash.java:490)
    at org.apache.ambari.server.state.alert.AlertDefinitionHash.enqueueAgentCommands(AlertDefinitionHash.java:460)
    at org.apache.ambari.server.events.listeners.alerts.AlertHashInvalidationListener.onAmbariEvent(AlertHashInvalidationListener.java:94)
    at sun.reflect.GeneratedMethodAccessor225.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.google.common.eventbus.EventHandler.handleEvent(EventHandler.java:74)
    at com.google.common.eventbus.EventBus.dispatch(EventBus.java:314)
    at com.google.common.eventbus.AsyncEventBus.access$001(AsyncEventBus.java:34)
    at com.google.common.eventbus.AsyncEventBus$1.run(AsyncEventBus.java:100)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NumberFormatException: For input string: ".0"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:569)
    at java.math.BigInteger.<init>(BigInteger.java:461)
    at java.math.BigInteger.<init>(BigInteger.java:597)
    at com.google.gson.internal.LazilyParsedNumber.intValue(LazilyParsedNumber.java:41)
    at com.google.gson.JsonPrimitive.getAsInt(JsonPrimitive.java:255)
    at com.google.gson.internal.bind.JsonTreeReader.nextInt(JsonTreeReader.java:197)
    at com.google.gson.internal.bind.TypeAdapters$7.read(TypeAdapters.java:230)

... 30 more

On 06/04/16 at 16:33, Jonathan Hurley wrote:
Alerts are automatically distributed to all hosts which match their 
service and component. So, if you created your alert definition with 
HDFS and NameNode, then Ambari will automatically push this alert 
definition to any host that's running NameNode. The host will begin 
running the alert automatically. There's really nothing that you need 
to do here; the alert framework handles everything for you.


On Apr 6, 2016, at 9:35 AM, Henning Kropp <hkr...@microlution.de> wrote:


Actually I added an alert definition (via REST), but it does not have 
any service/host attached, so I was wondering: how are hosts 
"attached" to an alert definition?


It's an alert for HDFS, NAMENODE, so the definition on POST contained 
the component and service attributes, which should be enough 
information to distribute the alert to the corresponding hosts?


Sorry for the confusion. In my search for an answer I came across 
the host-only alerts and th

Re: Changing the Alert Definitions

2016-04-06 Thread Henning Kropp
Actually I added an alert definition (via REST), but it does not have any 
service/host attached, so I was wondering: how are hosts "attached" to an 
alert definition?


It's an alert for HDFS, NAMENODE, so the definition on POST contained 
the component and service attributes, which should be enough information 
to distribute the alert to the corresponding hosts?
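
For reference, the POST looked roughly like this (a sketch with placeholder names and an abbreviated PORT source; service_name and component_name are the fields my question is about):

curl -u admin:admin -H "X-Requested-By: ambari" -i -X POST -d '{
  "AlertDefinition": {
    "name": "my_namenode_alert",
    "label": "My NameNode Alert",
    "service_name": "HDFS",
    "component_name": "NAMENODE",
    "interval": 1,
    "enabled": true,
    "scope": "ANY",
    "source": {
      "type": "PORT",
      "uri": "{{hdfs-site/dfs.namenode.rpc-address}}",
      "reporting": {
        "ok": { "text": "OK" },
        "warning": { "text": "slow", "value": 1.5 },
        "critical": { "text": "down", "value": 5.0 }
      }
    }
  }
}' http://:8080/api/v1/clusters/c1/alert_definitions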


Sorry for the confusion. In my search for an answer I came across the 
host-only alerts and thought it was related.


Thanks again for your help.

Regards,
Henning

On 06/04/16 at 15:26, Jonathan Hurley wrote:
I think what you're asking about is a concept known as host-level 
alerts. These are alerts which are not scoped by any particular hadoop 
service. A good example of this is the disk usage alert. It's bound 
only to a host and will be distributed and run regardless of what 
components are installed on that host.


There are two ways to add a host alert:
1) Edit the alerts.json under /var/lib/ambari-server/resources and add 
your new alert to the "AMBARI_AGENT" component.
2) Use the REST APIs to create your new alert. The service should be 
"AMBARI" and the component should be "AMBARI_AGENT".


You can use the current agent alert (disk usage) as an example:
https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/alerts.json#L31
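
A rough sketch of option 2 via curl (the cluster name c1 and the script path are placeholders, not the exact built-in values):

curl -u admin:admin -H "X-Requested-By: ambari" -i -X POST -d '{
  "AlertDefinition": {
    "name": "my_host_alert",
    "label": "My Host Alert",
    "service_name": "AMBARI",
    "component_name": "AMBARI_AGENT",
    "interval": 1,
    "enabled": true,
    "scope": "HOST",
    "source": {
      "type": "SCRIPT",
      "path": "my_host_check.py"
    }
  }
}' http://:8080/api/v1/clusters/c1/alert_definitions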

On Apr 6, 2016, at 8:56 AM, Henning Kropp <hkr...@microlution.de> wrote:


How can an alert be added to a host?


On 05/04/16 at 18:41, Henning Kropp wrote:

Worked now. Thanks.

On 05/04/16 at 18:01, Jonathan Hurley wrote:
The alerts.json file is only to pick up brand new alerts that are 
not currently defined in the system. It's more of a way to quickly 
seed Ambari with a default set of alerts. If the alert has already 
been created, any updates for that alert made in alerts.json will 
not be brought in. You'll need to use the REST APIs to update 
existing definitions.


You are correct that the agents run the alerts. The 
definitions.json file on each agent shows what alerts it is trying 
to run.


On Apr 5, 2016, at 11:46 AM, Henning Kropp <hkr...@microlution.de> wrote:


Hi,

I am currently trying to change the alert definitions. I used the 
REST API to PUT a new definition, for example for id /30. I can 
see the changes when doing a GET.


Additionally, I replaced the alerts.json of the service under 
ambari-server and ambari-agent. Still, the changes are not 
reflected in /var/lib/ambari-agent/cache/alerts/definitions.json, 
and I suspect the alert is not working as expected because of this.


As I understand it, the definitions are broadcast with heartbeats by 
the server, and are executed by the agent on the host where the 
service is running? Right?


What am I missing?

Thanks,
Henning















Re: Changing the Alert Definitions

2016-04-06 Thread Henning Kropp

How can an alert be added to a host?


On 05/04/16 at 18:41, Henning Kropp wrote:

Worked now. Thanks.

On 05/04/16 at 18:01, Jonathan Hurley wrote:
The alerts.json file is only to pick up brand new alerts that are not 
currently defined in the system. It's more of a way to quickly seed 
Ambari with a default set of alerts. If the alert has already been 
created, any updates for that alert made in alerts.json will not be 
brought in. You'll need to use the REST APIs to update existing 
definitions.


You are correct that the agents run the alerts. The definitions.json 
file on each agent shows what alerts it is trying to run.


On Apr 5, 2016, at 11:46 AM, Henning Kropp wrote:


Hi,

I am currently trying to change the alert definitions. I used the 
REST API to PUT a new definition, for example for id /30. I can see 
the changes when doing a GET.


Additionally, I replaced the alerts.json of the service under 
ambari-server and ambari-agent. Still, the changes are not reflected 
in /var/lib/ambari-agent/cache/alerts/definitions.json, and I suspect 
the alert is not working as expected because of this.


As I understand it, the definitions are broadcast with heartbeats by the 
server, and are executed by the agent on the host where the service 
is running? Right?


What am I missing?

Thanks,
Henning










Re: Changing the Alert Definitions

2016-04-05 Thread Henning Kropp

Worked now. Thanks.

On 05/04/16 at 18:01, Jonathan Hurley wrote:

The alerts.json file is only to pick up brand new alerts that are not currently 
defined in the system. It's more of a way to quickly seed Ambari with a default 
set of alerts. If the alert has already been created, any updates for that 
alert made in alerts.json will not be brought in. You'll need to use the REST 
APIs to update existing definitions.

You are correct that the agents run the alerts. The definitions.json file on 
each agent shows what alerts it is trying to run.


On Apr 5, 2016, at 11:46 AM, Henning Kropp  wrote:

Hi,

I am currently trying to change the alert definitions. I used the REST API to 
PUT a new definition, for example for id /30. I can see the changes when doing 
a GET.

Additionally, I replaced the alerts.json of the service under ambari-server and 
ambari-agent. Still, the changes are not reflected in 
/var/lib/ambari-agent/cache/alerts/definitions.json, and I suspect the alert is 
not working as expected because of this.

As I understand it, the definitions are broadcast with heartbeats by the server, 
and are executed by the agent on the host where the service is running? Right?

What am I missing?

Thanks,
Henning







Changing the Alert Definitions

2016-04-05 Thread Henning Kropp

Hi,

I am currently trying to change the alert definitions. I used the REST 
API to PUT a new definition, for example for id /30. I can see the 
changes when doing a GET.
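
For the record, the calls look roughly like this (the cluster name c1 is a placeholder; the PUT body carries only the fields being changed):

# inspect the current definition
curl -u admin:admin http://:8080/api/v1/clusters/c1/alert_definitions/30
# update a field in place
curl -u admin:admin -H "X-Requested-By: ambari" -i -X PUT -d '{"AlertDefinition": {"interval": 5}}' http://:8080/api/v1/clusters/c1/alert_definitions/30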


Additionally, I replaced the alerts.json of the service under ambari-server 
and ambari-agent. Still, the changes are not reflected in 
/var/lib/ambari-agent/cache/alerts/definitions.json, and I suspect the 
alert is not working as expected because of this.


As I understand it, the definitions are broadcast with heartbeats by the 
server, and are executed by the agent on the host where the service is 
running? Right?


What am I missing?

Thanks,
Henning



Re: setup-security in silent mode

2016-04-05 Thread Henning Kropp

Hi,

you are right. I created an Ansible script around this topic; maybe it 
saves you some time.

Here are the steps in my Ansible script:

  - name: Enable SSL
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='api.ssl' line='api.ssl=true' owner=root group=root mode=0644

  - name: Set two-way SSL
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='security.server.two_way_ssl' line='security.server.two_way_ssl=true' owner=root group=root mode=0644

  - name: Configure certificate path
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='client.api.ssl.cert_name' line='client.api.ssl.cert_name=https.crt' owner=root group=root mode=0644

  - name: Configure key path
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='client.api.ssl.key_name' line='client.api.ssl.key_name=https.key' owner=root group=root mode=0644

  - name: Keys directory path
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='security.server.keys_dir' line='security.server.keys_dir=/var/lib/ambari-server/keys' owner=root group=root mode=0644

  - name: Truststore path
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='ssl.trustStore.path' line='ssl.trustStore.path=/var/lib/ambari-server/keys/keystore.p12' owner=root group=root mode=0644

  - name: Truststore type
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='ssl.trustStore.type' line='ssl.trustStore.type=pkcs12' owner=root group=root mode=0644

  - name: Truststore password
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='ssl.trustStore.password' line='ssl.trustStore.password=horton' owner=root group=root mode=0644

  - name: Client API SSL port
    lineinfile: dest=/etc/ambari-server/conf/ambari.properties regexp='client.api.ssl.port' line='client.api.ssl.port=8443' owner=root group=root mode=0644

  - name: IPTABLES / 8443 / https web UI
    command: iptables -I INPUT -p tcp --dport 8443 -s 0.0.0.0/0 -j ACCEPT

  - name: Copy certificate to /var/lib/ambari-server/keys/
    copy: src=company-bank-01.cloud.hortonworks.com.crt dest=/var/lib/ambari-server/keys/https.crt owner=root group=root mode=0600

  - name: Copy private key to /var/lib/ambari-server/keys/
    copy: src=company-bank-01.cloud.hortonworks.com.key dest=/var/lib/ambari-server/keys/https.key owner=root group=root mode=0600

  - name: Create key password file
    copy: src=company-key.pass.txt dest=/var/lib/ambari-server/keys/https.pass.txt group=root mode=0600

  - name: Create truststore password file
    copy: src=company-key.pass.txt dest=/var/lib/ambari-server/keys/pass.txt group=root mode=0600

  - name: Remove old keystores
    command: rm -f /var/lib/ambari-server/keys/https.keystore.p12

  - command: rm -f /var/lib/ambari-server/keys/keystore.p12

  - name: Create keystore from certificate and key
    command: openssl pkcs12 -export -in '/var/lib/ambari-server/keys/https.crt' -inkey '/var/lib/ambari-server/keys/https.key' -certfile '/var/lib/ambari-server/keys/https.crt' -out '/var/lib/ambari-server/keys/https.keystore.p12' -password file:'/var/lib/ambari-server/keys/https.pass.txt' -passin file:'/var/lib/ambari-server/keys/pass.txt'

  - name: Create truststore
    command: /usr/jdk64/jdk1.8.0_40/bin/keytool -import -alias 'company-bank-01' -keystore '/var/lib/ambari-server/keys/keystore.p12' -storetype pkcs12 -file '/var/lib/ambari-server/keys/https.crt' -storepass 'horton' -noprompt

  - command: chmod 600 /var/lib/ambari-server/keys/https.keystore.p12
  - command: chmod 600 /var/lib/ambari-server/keys/keystore.p12

Regards,
Henning

On 04/04/16 at 18:48, Lukáš Drbal wrote:

Hi Dmitry,

thanks for the reply, but it's not exactly true.

"ambari-server setup-security" does some "magic" with the provided SSL 
certs/keys, which in my situation are stored here:
root@:/etc/ambari-server/conf# ls -la /var/lib/ambari-server/keys/

total 64
drwx------ 3 root root 4096 Apr  4 16:34 .
drwxr-xr-x 5 root root 4096 Mar 30 21:31 ..
-rw------- 1 root root  779 Mar 10 18:24 ca.config
-rw------- 1 root root 7153 Mar 30 21:32 ca.crt
-rw------- 1 root root 1651 Mar 30 21:32 ca.csr
-rw------- 1 root root 3311 Mar 30 21:32 ca.key
drwx------ 3 root root 4096 Mar 30 21:32 db
-rw------- 1 root root 2698 Apr  4 16:34 https.crt
-rw------- 1 root root 1751 Apr  4 16:34 https.key
-rw------- 1 root root 4917 Apr  4 16:34 https.keystore.p12
-rw------- 1 root root   50 Apr  4 16:34 https.pass.txt
-rw------- 1 root root 5693 Mar 30 21:32 keystore.p12
-rw------- 1 root root   50 Mar 30 21:31 pass.txt
https.crt has the same md5sum as the original certificate, but that's all 
I know for now. Maybe it's time to look into the source code.



L.

On Thu, Mar 31, 2016 at 12:29 PM, Dmitry Sen wrote:


Hi,


"ambari-server setup-security" just adds some lines to
/etc/ambari-server/conf/ambari.properties

So you can add them in non-interactive mode and restart ambari-server
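
For example, a minimal non-interactive sketch (property names as used in Henning's Ansible tasks above; the cert/key file names are placeholders):

# append the SSL properties and restart
cat >> /etc/ambari-server/conf/ambari.properties <<'EOF'
api.ssl=true
client.api.ssl.port=8443
client.api.ssl.cert_name=https.crt
client.api.ssl.key_name=https.key
EOF
ambari-server restart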


-

Re: Trying to create hbase tables after enabling Kerberos with Ambari

2016-03-22 Thread Henning Kropp

Roberta,

please try this resource 
https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/


Hope it helps.

Regards,
Henning

On 22/03/16 at 20:49, Roberta Marton wrote:


As I work more with Kerberos, I have questions that I cannot seem to 
figure out from the documentation and scanning the internet.  Maybe 
you can answer them.


From Ambari documentation:

“Each service and sub-service in Hadoop must have its own principal. A 
principal name in a given realm consists of a primary name and an 
instance name, which in this case is the FQDN of the host that runs 
that service. As services do not login with a password to acquire 
their tickets, their principal's authentication credentials are stored 
in a keytab file, which is extracted from the Kerberos database and 
stored locally with the service principal on the service component host.”


As part of enabling Kerberos, Ambari creates all these service 
principals and keytabs.  So my question is, how are tickets managed 
between the Hadoop services?  For example, HBase needs to talk to HDFS 
to write some data.  If I instigate this request, does HBase send my 
ticket to services like HDFS, or does it intercept the request and 
send its own ticket to HDFS to manage the request?


How does HBase (and other Hadoop services) manage their own ticket 
renewal and expiration?  Do they use a thread to automatically renew 
the ticket like suggested in many forums?  What happens if the ticket 
expires in the middle of a request?  Is there code in each service to 
determine that a ticket is about to expire, and perform a kinit to 
create a new ticket and send it seamlessly down the line?


   Regards,

   Roberta

From: Robert Levas [mailto:rle...@hortonworks.com]
Sent: Tuesday, March 22, 2016 6:45 AM
To: user@ambari.apache.org
Subject: Re: Trying to create hbase tables after enabling Kerberos 
with Ambari


Henning…

I didn’t know about that hadoop command.  This is awesome. Thanks!

hadoop org.apache.hadoop.security.HadoopKerberosName trafodion-robertaclus...@trafkdc.com

Rob

From: Henning Kropp <hkr...@microlution.de>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, March 21, 2016 at 5:49 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Re: Trying to create hbase tables after enabling Kerberos 
with Ambari


Hi,

what Robert suggested sounds to me like exactly what you need. It 
would help if you could provide your auth_to_local setting and the 
output of hbase> whoami


Another way to test your auth_to_local setting would be to execute:
% hadoop org.apache.hadoop.security.HadoopKerberosName trafodion-robertaclus...@trafkdc.com


Please be aware that the rules are applied in order, so it is 
important to have the rule from Robert before the default rule.


A simpler rule could also be:
RULE:[1:$1@$0](trafodion-robertaclus...@trafkdc.com)s/.*/trafodion/


The above rule will only work for this principal/user. Put it as the 
first line of your auth_to_local and use HadoopKerberosName to test if 
it is working.


Regards,
Henning

On 21/03/16 at 21:40, Roberta Marton wrote:

Thanks for your suggestion.  My property settings did have the
second rule defined but not the first.

However, it did not seem to help.

I tried setting the rule several other ways but nothing seems to
work.  I still get the same behavior.

Roberta

From: Robert Levas [mailto:rle...@hortonworks.com]
Sent: Monday, March 21, 2016 11:21 AM
To: user@ambari.apache.org
Subject: Re: Trying to create hbase tables after enabling
Kerberos with Ambari

Hi Roberta…

It seems like you need an auth-to-local rule set up to translate
trafodion-robertaclus...@trafkdc.com to trafodion.

You can do this by editing the hadoop.security.auth_to_local
property under HDFS->Configs->Advanced->Advanced core-site.

Adding the following rule should do the trick:

RULE:[1:$1@$0](.*-robertaclus...@trafkdc.com)s/-robertaCluster@.*//

You will need to add this rule to the ruleset before/above less
general rules like

RULE:[1:$1@$0](.*@TRAFKDC.COM)s/@.*//

After adding this rule, save the config and restart the
recommended services.

I hope this helps,

Rob


Re: Trying to create hbase tables after enabling Kerberos with Ambari

2016-03-21 Thread Henning Kropp

Hi,

what Robert suggested sounds to me like exactly what you need. It would 
help if you could provide your auth_to_local setting and the output of 
hbase> whoami


Another way to test your auth_to_local setting would be to execute:
% hadoop org.apache.hadoop.security.HadoopKerberosName trafodion-robertaclus...@trafkdc.com



Please be aware that the rules are applied in order, so it is important 
to have the rule from Robert before the default rule.


A simpler rule could also be:
RULE:[1:$1@$0](trafodion-robertaclus...@trafkdc.com)s/.*/trafodion/

The above rule will only work for this principal/user. Put it as the 
first line of your auth_to_local and use HadoopKerberosName to test if 
it is working.
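
Concretely, the relevant part of hadoop.security.auth_to_local would then start like this (the specific rule first, the general realm rule and the default last):

RULE:[1:$1@$0](trafodion-robertaclus...@trafkdc.com)s/.*/trafodion/
RULE:[1:$1@$0](.*@TRAFKDC.COM)s/@.*//
DEFAULT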


Regards,
Henning


On 21/03/16 at 21:40, Roberta Marton wrote:


Thanks for your suggestion.  My property settings did have the second 
rule defined but not the first.


However, it did not seem to help.

I tried setting the rule several other ways but nothing seems to 
work.  I still get the same behavior.


Roberta

From: Robert Levas [mailto:rle...@hortonworks.com]
Sent: Monday, March 21, 2016 11:21 AM
To: user@ambari.apache.org
Subject: Re: Trying to create hbase tables after enabling Kerberos 
with Ambari


Hi Roberta…

It seems like you need an auth-to-local rule set up to translate 
trafodion-robertaclus...@trafkdc.com to trafodion.


You can do this by editing the hadoop.security.auth_to_local property 
under HDFS->Configs->Advanced->Advanced core-site.


Adding the following rule should do the trick:

RULE:[1:$1@$0](.*-robertaclus...@trafkdc.com)s/-robertaCluster@.*// 


You will need to add this rule to the ruleset before/above less 
general rules like


RULE:[1:$1@$0](.*@TRAFKDC.COM)s/@.*//


After adding this rule, save the config and restart the recommended 
services.


I hope this helps,

Rob

From: Roberta Marton
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Monday, March 21, 2016 at 2:08 PM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Trying to create hbase tables after enabling Kerberos with 
Ambari


I am trying to install Kerberos on top of my Hortonworks 
installation.  I have tried this with both versions 2.2 and 2.3 and 
get similar results.


After I enable Kerberos, I create a Linux user called trafodion and 
grant this user all HBase permissions.


I connect as trafodion but get permission errors when I try to create 
a table.


Details:

[trafodion@myhost ~]$ whoami

trafodion

[trafodion@myhost ~]$ klist

Ticket cache: FILE:/tmp/krb5cc_503

Default principal: trafodion-robertaclus...@trafkdc.com 



Valid starting       Expires              Service principal

03/21/16 16:39:33  03/22/16 16:39:33 krbtgt/trafkdc@trafkdc.com 



renew until 03/21/16 16:39:33

hbase shell

hbase(main):002:0> whoami

trafodion-robertaclus...@trafkdc.com (auth:KERBEROS)


2016-03-21 17:06:22,925 WARN  [main] security.UserGroupInformation: No 
groups available for user trafodion-robertaCluster


hbase(main):003:0> user_permission

User Table,Family,Qualifier:Permission

trafodion hbase:acl,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]

ambari-qa hbase:acl,,: [Permission: actions=READ,WRITE,EXEC,CREATE,ADMIN]

2 row(s) in 1.7630 seconds

hbase(main):004:0> create 't1', 'f1', 'f2'

ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: 
Insufficient permissions for user 'trafodion-robertaCluster' (global, 
action=CREATE)


I am able to perform 'user_permission' but not 'create'.

Any suggestion on how to proceed?

Roberta





Re: Disable Custom Config Section for Service

2016-03-10 Thread Henning Kropp
Thank you Matt!



I did use <configuration supports_final="true" supports_do_not_extend="true">. 
You wouldn't happen to know the difference between supports_do_not_extend 
and supports_adding_forbidden?

Or maybe there is no difference, and it just depends on the Ambari version?



Regards,

Henning



BTW: I also asked the question here, in case you would like to answer:
https://community.hortonworks.com/questions/22295/how-do-i-disable-custom-config-section-for-a-servi.html



On Thu, 10 Mar 2016 10:35:40 +0100, Mithun Mathew <mithm...@gmail.com> wrote:




You can use an attribute called supports_adding_forbidden inside the 
configuration tag.



Something like this:



<configuration supports_adding_forbidden="true">





Regards

Matt




On Thu, Mar 10, 2016 at 12:50 AM, Henning Kropp <hkr...@microlution.de> 
wrote:



Hi,



how can I disable the "Custom" properties section for a config file defined in 
a newly created service for Ambari?



Thanks and regards,

Henning














-- 

Mithun Mathew (Matt)

www.linkedin.com/in/mithunmatt/


















Disable Custom Config Section for Service

2016-03-10 Thread Henning Kropp
Hi,

how can I disable the "Custom" properties section for a config file defined in 
a newly created service for Ambari?

Thanks and regards,
Henning







Re: Ambari commandScript running on ambari server itself

2015-11-11 Thread Henning Kropp
Hi,



maybe /var/lib/ambari-server/resources/custom_action_definitions/system_action_definitions.xml 
and /var/lib/ambari-server/resources/custom_actions/scripts could work for 
your purposes?
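
For orientation, an entry in such an action definitions XML looks roughly like this; a hedged sketch modeled on the built-in check_host definition, where the element set and the action name generate_certs are assumptions, so verify against the definitions shipped with your Ambari version:

<actionDefinitions>
  <actionDefinition>
    <actionName>generate_certs</actionName>
    <actionType>SYSTEM</actionType>
    <targetType>ANY</targetType>
    <defaultTimeout>60</defaultTimeout>
    <description>Generate certificates/tokens before component install</description>
  </actionDefinition>
</actionDefinitions>

The matching script (here it would be generate_certs.py) goes under /var/lib/ambari-server/resources/custom_actions/scripts, and the action can then be triggered as a custom request via the API.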



Regards





On Wed, 11 Nov 2015 19:27:04 +0100, Constantine Yarovoy wrote:




Hi!



I am developing my own custom app stack and a bunch of services for Ambari (not 
related to HDP).

And I have a specific requirement: run some code locally on the Ambari server 
right after the deployment process starts, but before any service components are 
installed on the chosen hosts.


This "local" code is supposed to do 2 operations:

1) generate some files (e.g. certificates/tokens)

2) copy/distribute them to all master hosts.



Something similar can be found in Ansible, where it's called "local_action".



Is there any recommended way to declare such a local command script that runs on 
the Ambari server itself? I mean, of course, I have 2 options for the desired 
automation, but they both "suck", I think:



- I can code it in Ansible or bash and run it prior to cluster deployment

- I can code it inside the master commandScript's install function with 
Execute("generate.sh > generate.out && scp generate.out 
user@master-host-1:/home/user/").



But it'd be much better if Ambari had such functionality, so I could do 
all operations from just the Ambari Web UI.



Any suggestions?