Re: Ambari Metric Collector keeps stopping with FATAL error

2016-10-07 Thread Dmitry Sen
Hi,


Probably there are not enough resources on this single node to run all the deployed 
services.

If it fails right after start, try to increase 
timeline.metrics.service.watcher.initial.delay.

If AMS works fine but just stops, you can set 
timeline.metrics.service.watcher.disabled to true as a workaround.
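
For reference, a sketch of what the corresponding ams-site entries could look like 
(the values here are illustrative only; the initial delay is in seconds):

timeline.metrics.service.watcher.initial.delay=600
timeline.metrics.service.watcher.disabled=true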



From: Chin Wei Low 
Sent: Friday, October 07, 2016 5:31 AM
To: user@ambari.apache.org
Subject: Ambari Metric Collector keeps stopping with FATAL error

Hi,

I am running Ambari 2.2 and the Ambari Metrics Collector keeps stopping with the 
following FATAL error. This is a single-node deployment with embedded HBase.


FATAL 
org.apache.hadoop.yarn.server.applicationhistoryservice.metrics.timeline.TimelineMetricStoreWatcher:
 Error getting metrics from TimelineMetricStore. Shutting down by 
TimelineMetricStoreWatcher.

I have tried to set timeline.metrics.service.watcher.timeout to 90, but it does 
not help.

Any help will be appreciated.


Regards,
Chin Wei



Re: 2.2.1.1 and Kafka widget

2016-04-12 Thread Dmitry Sen
They will re-appear after the upgrade to Ambari 2.2.2, and you'll be able to add any 
customized widgets. I'm not sure if it can be fixed without the upgrade.



From: Perroud Benoit <ben...@noisette.ch>
Sent: Tuesday, April 12, 2016 11:15 PM
To: user@ambari.apache.org
Subject: Re: 2.2.1.1 and Kafka widget

Thanks for the quick answer. I'm wondering if, with the fix, the widgets will 
re-appear, or if it is easy to re-add them.

Thanks

Benoit



On Apr 12, 2016, at 4:16 PM, Dmitry Sen 
<d...@hortonworks.com<mailto:d...@hortonworks.com>> wrote:

Hi Benoit,

That's a known bug, it's solved in 
https://issues.apache.org/jira/browse/AMBARI-15759 for Ambari 2.2.2


From: Benoit Perroud <ben...@noisette.ch<mailto:ben...@noisette.ch>>
Sent: Tuesday, April 12, 2016 4:46 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: 2.2.1.1 and Kafka widget

Hi All,

2.2.1.1 release notes mention 
https://issues.apache.org/jira/browse/AMBARI-14941 which is supposed to enhance 
Kafka and Storm widget.

I deployed 2.2.1.1 and the Kafka widgets disappeared, but the Create Widget 
button is not working.

In the dev console, I got the following error:

Uncaught TypeError: Cannot read property 'mapProperty' of undefined
module.exports.Em.Route.extend.createServiceWidget @ app.js:80158
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17348
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.send @ vendor.js:17333
App.MainServiceInfoSummaryController.Em.Controller.extend.createWidget @ app.js:24925
App.MainServiceInfoSummaryView.Em.View.extend.doWidgetAction @ app.js:198659
ActionHelper.registeredActions.(anonymous function).handler @ vendor.js:21227
(anonymous function) @ vendor.js:13019
f.event.dispatch @ vendor.js:126
h.handle.i @ vendor.js:126

Did someone manage to create a widget for Kafka?

Thanks

Benoit





Re: 2.2.1.1 and Kafka widget

2016-04-12 Thread Dmitry Sen
Hi Benoit,


That's a known bug, it's solved in 
https://issues.apache.org/jira/browse/AMBARI-15759 for Ambari 2.2.2




From: Benoit Perroud 
Sent: Tuesday, April 12, 2016 4:46 PM
To: user@ambari.apache.org
Subject: 2.2.1.1 and Kafka widget

Hi All,

2.2.1.1 release notes mention 
https://issues.apache.org/jira/browse/AMBARI-14941 which is supposed to enhance 
Kafka and Storm widget.

I deployed 2.2.1.1 and the Kafka widgets disappeared, but the Create Widget 
button is not working.

In the dev console, I got the following error:

Uncaught TypeError: Cannot read property 'mapProperty' of undefined
module.exports.Em.Route.extend.createServiceWidget @ app.js:80158
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17348
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
Ember.StateManager.Ember.State.extend.send @ vendor.js:17333
App.MainServiceInfoSummaryController.Em.Controller.extend.createWidget @ app.js:24925
App.MainServiceInfoSummaryView.Em.View.extend.doWidgetAction @ app.js:198659
ActionHelper.registeredActions.(anonymous function).handler @ vendor.js:21227
(anonymous function) @ vendor.js:13019
f.event.dispatch @ vendor.js:126
h.handle.i @ vendor.js:126

Did someone manage to create a widget for Kafka?

Thanks

Benoit



Re: setup-security in silent mode

2016-03-31 Thread Dmitry Sen
Hi,


"ambari-server setup-security" just adds some lines to 
/etc/ambari-server/conf/ambari.properties

So you can add them in non-interactive mode and restart ambari-server.
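
For HTTPS, for example, the lines written by an interactive run look roughly like 
the sketch below (property names as on an Ambari 2.x install; run setup-security 
once on a test server and diff ambari.properties to confirm the exact set for 
your version):

api.ssl=true
client.api.ssl.port=8443
client.api.ssl.cert_name=https.crt
client.api.ssl.key_name=https.key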


From: Lukás Drbal 
Sent: Thursday, March 31, 2016 1:01 AM
To: user@ambari.apache.org
Subject: setup-security in silent mode

Hi,

is there any way to set up security for Ambari (HTTPS) in non-interactive 
mode?
I need to update my Ansible role for the Ambari server to use HTTPS, but everything 
I found uses the command "ambari-server setup-security" in interactive mode. Is it 
possible to use some args?

Thanks.

--
Save The World - http://www.worldcommunitygrid.org/
http://www.worldcommunitygrid.org/stat/viewMemberInfo.do?userName=LesTR

LesTR


Re: Do we have API for change password

2016-03-21 Thread Dmitry Sen
It works for me:


curl -i -u admin:admin  'http://192.168.120.6:1081/api/v1/users/admin' -X PUT 
-H 'X-Requested-By: ambari' --data-binary 
'{"Users/password":"newpasswd","Users/old_password":"admin"}'


BR,

Dmytro Sen



From: Satyanarayana Jampa 
Sent: Monday, March 21, 2016 7:59 AM
To: user@ambari.apache.org
Subject: Do we have API for change password

Hi,
  Just wondering if we have an API to change the Ambari password, instead of 
logging in and changing the password.
  Actually I have a custom script which takes the username and password as 
arguments, which I have to use for configuration before installing Ambari.

Thanks,
Satya.



Re: Metrics build failure

2016-02-12 Thread Dmitry Sen
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant 
BuildException has occured: exec returned: 1
around Ant part ..
 @ 4:287 in 
/data/stage/apache-ambari-2.2.1-src/ambari-metrics/ambari-metrics-host-monitoring/target/antrun/build-psutils-compile.xml
at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more
Caused by: 
/data/stage/apache-ambari-2.2.1-src/ambari-metrics/ambari-metrics-host-monitoring/target/antrun/build-psutils-compile.xml:4:
 exec returned: 1
at org.apache.tools.ant.taskdefs.ExecTask.runExecute(ExecTask.java:646)
at org.apache.tools.ant.taskdefs.ExecTask.runExec(ExecTask.java:672)
at org.apache.tools.ant.taskdefs.ExecTask.execute(ExecTask.java:498)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at org.apache.tools.ant.Project.executeTarget(Project.java:1368)
at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:327)
... 22 more
[ERROR]
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :ambari-metrics-host-monitoring


Ram

On Thu, Feb 11, 2016 at 11:37 AM, rammohan ganapavarapu 
<rammohanga...@gmail.com<mailto:rammohanga...@gmail.com>> wrote:

I tried that too, still the same error.

Thanks

On Feb 11, 2016 10:26 AM, "Dmitry Sen" 
<d...@hortonworks.com<mailto:d...@hortonworks.com>> wrote:

2.2.1 is an incorrect value for the version

try

mvn -B -e versions:set -DnewVersion=2.2.1.0


BR,

Dmytro Sen



From: rammohan ganapavarapu 
<rammohanga...@gmail.com<mailto:rammohanga...@gmail.com>>
Sent: Thursday, February 11, 2016 7:56 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Metrics build failure

Hi,

I am trying to build apache-ambari-2.2.1 from source, and I am getting this error 
for the ambari-metrics project; any help resolving this would be great.

[ERROR] Failed to execute goal 
org.codehaus.mojo:build-helper-maven-plugin:1.8:regex-property 
(parse-package-version) on project ambari-metrics: No match to regex 
'^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)(\.|-).*' found in '2.2.1'. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:build-helper-maven-plugin:1.8:regex-property 
(parse-package-version) on project ambari-metrics: No match to regex 
'^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)(\.|-).*' found in '2.2.1'.
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven

Re: Using custom installer for Ambari Service

2016-02-11 Thread Dmitry Sen
It'll work.

The proper place to do this is the install() method in the service's Python scripts, 
but you should also handle the case when the install() method is called multiple times.


Nevertheless, using rpm/deb packages is the best choice here.
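
As an illustration, a minimal sketch of an idempotent install() (assuming the 
standard resource_management agent API; the installer URL, paths and marker file 
are hypothetical placeholders):

from resource_management import Script, Execute
import os

class ProductMaster(Script):
    # Hypothetical marker recording that the self-extracting installer already ran
    INSTALLED_MARKER = '/var/lib/myproduct/.ambari_installed'

    def install(self, env):
        # install() may be invoked again on retries/reinstalls, so skip the
        # heavy work when the marker is already present
        if os.path.exists(self.INSTALLED_MARKER):
            return
        # Hypothetical download and silent run of the self-extracting archive
        Execute('wget -q -O /tmp/myproduct.bin http://repo.example.com/myproduct.bin')
        Execute('chmod +x /tmp/myproduct.bin && /tmp/myproduct.bin --quiet')
        Execute('mkdir -p /var/lib/myproduct && touch ' + self.INSTALLED_MARKER)

    def start(self, env):
        pass

    def stop(self, env):
        pass

    def status(self, env):
        pass

if __name__ == "__main__":
    ProductMaster().execute()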


Dmytro Sen



From: priyanka gugale 
Sent: Thursday, February 11, 2016 7:42 AM
To: user@ambari.apache.org
Subject: Using custom installer for Ambari Service

Hi,

We are planning to write an Ambari service for one of our proprietary products. The 
installer we have is a self-extracting archive. Is it a good idea to write the 
installer script to download and install this already-available install 
package, or do we have to create Linux distribution packages in the form of rpm 
or deb?

-Priyanka


Re: Metrics build failure

2016-02-11 Thread Dmitry Sen
2.2.1 is an incorrect value for the version

try

mvn -B -e versions:set -DnewVersion=2.2.1.0


BR,

Dmytro Sen



From: rammohan ganapavarapu 
Sent: Thursday, February 11, 2016 7:56 PM
To: user@ambari.apache.org
Subject: Metrics build failure

Hi,

I am trying to build apache-ambari-2.2.1 from source, and I am getting this error 
for the ambari-metrics project; any help resolving this would be great.

[ERROR] Failed to execute goal 
org.codehaus.mojo:build-helper-maven-plugin:1.8:regex-property 
(parse-package-version) on project ambari-metrics: No match to regex 
'^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)(\.|-).*' found in '2.2.1'. -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.codehaus.mojo:build-helper-maven-plugin:1.8:regex-property 
(parse-package-version) on project ambari-metrics: No match to regex 
'^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)(\.|-).*' found in '2.2.1'.
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:212)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoFailureException: No match to regex 
'^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)(\.|-).*' found in '2.2.1'.
at 
org.codehaus.mojo.buildhelper.RegexPropertyMojo.execute(RegexPropertyMojo.java:105)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
... 20 more


Thanks,


Re: How to restart services when machine restart ?

2015-12-22 Thread Dmitry Sen
Hi Jeff,


Yes. It's been implemented in Ambari 2.1.0

See https://issues.apache.org/jira/browse/AMBARI-10029

It has an attached design doc.


To turn it on, just add the line "recovery.type=AUTO_START" to the 
ambari.properties config file and restart the ambari-server.
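
A minimal shell sketch of that change (paths as above):

echo "recovery.type=AUTO_START" >> /etc/ambari-server/conf/ambari.properties
ambari-server restart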


BR,

Dmytro Sen



From: Jeff Zhang 
Sent: Tuesday, December 22, 2015 1:42 PM
To: user@ambari.apache.org
Subject: How to restart services when machine restart ?


I am testing Ambari on a VM, and each time I restart this VM, I have to restart 
the services manually. Does Ambari support restarting services automatically, just 
like the HDP sandbox? Thanks

--
Best Regards

Jeff Zhang


Re: Posting Metrics to Ambari - Metrics Collector

2015-12-10 Thread Dmitry Sen

POST/GET metrics to 
http://127.0.0.1:6188/ws/v1/timeline/metrics/
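
For example, a hedged curl sketch against that endpoint (the payload is the one 
from the message below; note the full /ws/v1/timeline/metrics path rather than 
the server root, and the explicit Content-Type header):

curl -i -X POST -H "Content-Type: application/json" \
  -d '{"metrics":[{"metricname":"AMBARI_METRICS.SmokeTest.FakeMetric","appid":"amssmoketestfake","hostname":"127.0.0.1","timestamp":1432075898000,"starttime":1432075898000,"metrics":{"1432075898000":0.963781711428,"1432075899000":1432075898000}}]}' \
  http://127.0.0.1:6188/ws/v1/timeline/metrics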



From: matt Lieber 
Sent: Thursday, December 10, 2015 7:08 PM
To: user@ambari.apache.org
Subject: Fwd: Posting Metrics to Ambari - Metrics Collector



hi,

I am trying to post metrics to Ambari. I started the Metrics Collector in Ambari 
on their sandbox. Then I followed the instructions on 
https://cwiki.apache.org/confluence/display/AMBARI/Metrics+Collector+API+Specification
 and posted:


$ wget --post-data="{

  "metrics": [

{

  "metricname": "AMBARI_METRICS.SmokeTest.FakeMetric",

  "appid": "amssmoketestfake",

  "hostname": "127.0.0.1",

  "timestamp": 1432075898000,

  "starttime": 1432075898000,

  "metrics": {

"1432075898000": 0.963781711428,

"1432075899000": 1432075898000

  }

}

  ]

}" http://127.0.0.1:6188


--2015-12-10 08:48:42--  (try: 6)  http://127.0.0.1:6188/

Connecting to 127.0.0.1:6188... connected.

HTTP request sent, awaiting response... 302 Found

Location: http://127.0.0.1:6188/applicationhistory [following]

--2015-12-10 08:48:44--  http://127.0.0.1:6188/applicationhistory

Reusing existing connection to 127.0.0.1:6188.

HTTP request sent, awaiting response... 200 OK

Length: 5218 (5.1K) [text/html]

Saving to: `index.html.1'


100%[>]
 5,218   --.-K/s   in 0s


2015-12-10 08:48:45 (237 MB/s) - `index.html.1' saved [5218/5218]


Then I looked at the log, started in debug mode, but can't find a trace of the 
metric...

# cat /var/log/ambari-metrics-collector/ambari-metrics-collector.log | grep 
SmokeTest

[root@sandbox ~]#

I also looked via a query on Phoenix, but to no avail.

Anyone know what's going on?


cheers,
Matt



Re: Ambari metrics collector dies in 2.1.3-snapshot

2015-12-03 Thread Dmitry Sen
Hi,

Check if you have hadoop native libs in java.library.path

[root@c6404 cache]# ll /usr/lib/ams-hbase/lib/hadoop-native/
total 4688
-rw-r--r-- 1 root root 1319074 Dec  3 03:24 libhadoop.a
-rw-r--r-- 1 root root 1487444 Dec  3 03:24 libhadooppipes.a
-rw-r--r-- 1 root root  775455 Dec  3 03:24 libhadoop.so
-rw-r--r-- 1 root root  582760 Dec  3 03:24 libhadooputils.a
-rw-r--r-- 1 root root  366380 Dec  3 03:24 libhdfs.a
-rw-r--r-- 1 root root  230225 Dec  3 03:24 libhdfs.so
-rw-r--r-- 1 root root   19848 Dec  3 03:24 libsnappy.so.1

If not, the collector RPM hasn't been built correctly

From: Eirik Thorsnes 
Sent: Thursday, December 03, 2015 8:16 PM
To: user@ambari.apache.org
Subject: Ambari metrics collector dies in 2.1.3-snapshot

Hi,

I'm testing Ambari 2.1.3-snapshot (from Dec 1st, a830cc0) on HDP2.3.0
stack. In this setup Ambari-metrics-collector dies after some minutes
with the below log-paste (note the "FATAL" error, this comes after many
of the exceptions seen on top).

Possibly related to the pasted error below:

On startup it fails to load the native libraries, from the log:

2015-12-03 18:40:44,296  WARN [main] NativeCodeLoader:62 - Unable to
load native-hadoop library for your platform... using builtin-java
classes where applicable

even though they exist in the java.library.path given some lines below
in the log:

2015-12-03 18:40:44,396  INFO [main] ZooKeeper:100 - Client
environment:java.library.path=/usr/lib/ams-hbase/lib/hadoop-native -Xmx3072m

I also tried to replace the path above with a symlink to the
hadoop-client/lib/native dir (which has different content) - but this
did not help.

=== paste ===

Thu Dec 03 18:26:25 CET 2015,
RpcRetryingCaller{globalStartTime=1449163034289, pause=100, retries=35},
java.io.IOException: java.io.IOException:
java.lang.NoClassDefFoundError: org/iq8
0/snappy/CorruptionException
at
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
at
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
at
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7390)
at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1873)
at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1855)
at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError:
org/iq80/snappy/CorruptionException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:72)
... 10 more
Caused by: java.lang.ClassNotFoundException:
org.iq80.snappy.CorruptionException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more

at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
at
org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
at
org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
at
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
at
org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
at
org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1741)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.io.IOException:
java.lang.NoClassDefFoundError: org/iq80/snappy/CorruptionException
at
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
at

Re: Ambari metrics collector dies in 2.1.3-snapshot

2015-12-03 Thread Dmitry Sen
As a workaround for an already deployed cluster, you can set the java.library.path 
option to /usr/hdp/2.3.4.0-3371/hadoop/lib/native/ in the ams-hbase-env config.
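
A sketch of what that could look like in the ams-hbase-env "content" template 
(HBASE_OPTS is assumed here as the hook; check which variable your template 
actually routes JVM options through):

export HBASE_OPTS="$HBASE_OPTS -Djava.library.path=/usr/hdp/2.3.4.0-3371/hadoop/lib/native/"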

From: Dmitry Sen <d...@hortonworks.com>
Sent: Thursday, December 03, 2015 8:22 PM
To: user@ambari.apache.org
Subject: Re: Ambari metrics collector dies in 2.1.3-snapshot

Hi,

Check if you have hadoop native libs in java.library.path

[root@c6404 cache]# ll /usr/lib/ams-hbase/lib/hadoop-native/
total 4688
-rw-r--r-- 1 root root 1319074 Dec  3 03:24 libhadoop.a
-rw-r--r-- 1 root root 1487444 Dec  3 03:24 libhadooppipes.a
-rw-r--r-- 1 root root  775455 Dec  3 03:24 libhadoop.so
-rw-r--r-- 1 root root  582760 Dec  3 03:24 libhadooputils.a
-rw-r--r-- 1 root root  366380 Dec  3 03:24 libhdfs.a
-rw-r--r-- 1 root root  230225 Dec  3 03:24 libhdfs.so
-rw-r--r-- 1 root root   19848 Dec  3 03:24 libsnappy.so.1

If not, the collector RPM hasn't been built correctly

From: Eirik Thorsnes <eirik.thors...@uni.no>
Sent: Thursday, December 03, 2015 8:16 PM
To: user@ambari.apache.org
Subject: Ambari metrics collector dies in 2.1.3-snapshot

Hi,

I'm testing Ambari 2.1.3-snapshot (from Dec 1st, a830cc0) on HDP2.3.0
stack. In this setup Ambari-metrics-collector dies after some minutes
with the below log-paste (note the "FATAL" error, this comes after many
of the exceptions seen on top).

Possibly related to the pasted error below:

On startup it fails to load the native libraries, from the log:

2015-12-03 18:40:44,296  WARN [main] NativeCodeLoader:62 - Unable to
load native-hadoop library for your platform... using builtin-java
classes where applicable

even though they exist in the java.library.path given some lines below
in the log:

2015-12-03 18:40:44,396  INFO [main] ZooKeeper:100 - Client
environment:java.library.path=/usr/lib/ams-hbase/lib/hadoop-native -Xmx3072m

I also tried to replace the path above with a symlink to the
hadoop-client/lib/native dir (which has different content) - but this
did not help.

=== paste ===

Thu Dec 03 18:26:25 CET 2015,
RpcRetryingCaller{globalStartTime=1449163034289, pause=100, retries=35},
java.io.IOException: java.io.IOException:
java.lang.NoClassDefFoundError: org/iq8
0/snappy/CorruptionException
at
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:78)
at
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService.callMethod(ServerCachingProtos.java:3200)
at
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7390)
at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1873)
at
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1855)
at
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32209)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at
org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError:
org/iq80/snappy/CorruptionException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:264)
at
org.apache.phoenix.coprocessor.ServerCachingEndpointImpl.addServerCache(ServerCachingEndpointImpl.java:72)
... 10 more
Caused by: java.lang.ClassNotFoundException:
org.iq80.snappy.CorruptionException
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 13 more

at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
at
org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:95)
at
org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callMethod(CoprocessorRpcChannel.java:56)
at
org.apache.phoenix.coprocessor.generated.ServerCachingProtos$ServerCachingService$Stub.addServerCache(ServerCachingProtos.java:3270)
at
org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:204)
at
org.apache.phoenix.cache.ServerCacheClient$1$1.call(ServerCacheClient.java:189)
at org.apache.hadoop.hbase.client.HTable$16.call(HTable.java:1741)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadP

Re: Ambari commandScript running on ambari server itself

2015-11-12 Thread Dmitry Sen
5. Check YOUR_MASTER_COMPONENT for deployment and ambari server node


From: Dmitry Sen <d...@hortonworks.com>
Sent: Thursday, November 12, 2015 9:54 AM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself


Hi,


You can

1. Define a new master component for your service in metainfo.xml

2. Create for your stack role_command_order.json like 
stacks/HDP/2.3/role_command_order.json , add here dependency

"YOUR_OTHER_COMPONENTS-START" : ["YOUR_MASTER_COMPONENT-START"],

or

"YOUR_OTHER_COMPONENTS-INSTALL" : ["YOUR_MASTER_COMPONENT-START"],?

3. Implement the certificate generation logic in 
YOUR_SERVICE/1.0/scripts/master_component.py . The certificates should be 
located under /var/lib/ambari-server/resources/ ; it's a directory shared by the 
ambari server at :8080

4. Install ambari agent on the Ambari Server node


BR,

Dmytro Sen


From: Henning Kropp <hkr...@microlution.de>
Sent: Wednesday, November 11, 2015 11:21 PM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself

Hi,

maybe 
/var/lib/ambari-server/resources/custom_action_definitions/system_action_definitions.xml
 and /var/lib/ambari-server/resources/custom_actions/scripts could work for 
your purposes?

Regards


 On Wed, 11 Nov 2015 19:27:04 +0100 Constantine Yarovoy <kyaro...@gmail.com> 
wrote 

Hi!

I am developing my own custom app stack and a bunch of services for Ambari (not 
related to HDP).
And I have a specific requirement - run some code locally on the Ambari server 
right after the deployment process starts, but before any service components are 
installed on the chosen hosts.
This "local" code is supposed to do 2 operations:
1) generate some files (e.g. certificates/tokens)
2) that are to be copied/distributed to all master hosts.

Something similar can be found in Ansible and it's named "local-action".

Is there any recommended way to declare such a local command script that runs on 
the Ambari Server itself? I mean, of course, I have 2 options for the desired 
automation, but they all "suck", I think:

- I can code it in Ansible or bash and run it prior to cluster deployment
- I can code it inside the master commandScript's install function with 
Execute("generate.sh > generate.out && scp generate.out 
user@master-host-1:/home/user/").

But it'd be much better if Ambari had such functionality and I could have done 
all operations from just a single Ambari Web UI.

Any suggestions?



Re: Ambari commandScript running on ambari server itself

2015-11-12 Thread Dmitry Sen
5. Check YOUR_MASTER_COMPONENT for deployment on ambari server node

|5. Check YOUR_MASTER_COMPONENT for deployment and ambari server node


From: Dmitry Sen <d...@hortonworks.com>
Sent: Thursday, November 12, 2015 10:30 AM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself


5. Check YOUR_MASTER_COMPONENT for deployment and ambari server node


From: Dmitry Sen <d...@hortonworks.com>
Sent: Thursday, November 12, 2015 9:54 AM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself


Hi,


You can

1. Define a new master component for your service in metainfo.xml

2. Create for your stack role_command_order.json like 
stacks/HDP/2.3/role_command_order.json , add here dependency

"YOUR_OTHER_COMPONENTS-START" : ["YOUR_MASTER_COMPONENT-START"],

or

"YOUR_OTHER_COMPONENTS-INSTALL" : ["YOUR_MASTER_COMPONENT-START"],?

3. Implement the certificate generation logic in 
YOUR_SERVICE/1.0/scripts/master_component.py . The certificates should be 
located under /var/lib/ambari-server/resources/ ; it's a directory shared by the 
ambari server at :8080

4. Install ambari agent on the Ambari Server node


BR,

Dmytro Sen


From: Henning Kropp <hkr...@microlution.de>
Sent: Wednesday, November 11, 2015 11:21 PM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself

Hi,

maybe 
/var/lib/ambari-server/resources/custom_action_definitions/system_action_definitions.xml
 and /var/lib/ambari-server/resources/custom_actions/scripts could work for 
your purposes?

Regards


 On Wed, 11 Nov 2015 19:27:04 +0100 Constantine Yarovoy <kyaro...@gmail.com> 
wrote 

Hi!

I am developing my own custom app stack and a bunch of services for Ambari (not 
related to HDP).
And I have a specific requirement - run some code locally on the Ambari server 
right after the deployment process starts, but before any service components are 
installed on the chosen hosts.
This "local" code is supposed to do 2 operations:
1) generate some files (e.g. certificates/tokens)
2) that are to be copied/distributed to all master hosts.

Something similar can be found in Ansible and it's named "local-action".

Is there any recommended way to declare such a local command script that runs on 
the Ambari Server itself? I mean, of course, I have 2 options for the desired 
automation, but they all "suck", I think:

- I can code it in Ansible or bash and run it prior to cluster deployment
- I can code it inside the master commandScript's install function with 
Execute("generate.sh > generate.out && scp generate.out 
user@master-host-1:/home/user/").

But it'd be much better if Ambari had such functionality and I could have done 
all operations from just a single Ambari Web UI.

Any suggestions?



Re: Unable to Start Ambari Metrics Collector

2015-11-11 Thread Dmitry Sen
Hi,


In the RS log:

org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth 
for /ams-hbase-secure/running


1. Do you have ams keytabs at /etc/security/keytabs ?

If not, create and put the keytabs in /etc/security/keytabs (see the principal 
names and keytab paths in the ams-hbase-security-site config). The easiest way is 
to disable security for the cluster and enable it again; the Security Wizard will 
create all the required keytabs for you.

2. Check with kinit whether you can get a ticket from the Kerberos server for 
amshbase/mst03.corp@corp.com and 
amshbasemaster/mst03.corp@corp.com
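
For example (a sketch; the keytab file name is one of those listed later in this 
thread and the principal is the one above; adjust both to your 
ams-hbase-security-site values):

kinit -kt /etc/security/keytabs/ams-hbase.master.keytab amshbasemaster/mst03.corp@corp.com
klist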





From: Shaik M 
Sent: Wednesday, November 11, 2015 12:15 PM
To: user@ambari.apache.org
Subject: Unable to Start Ambari Metrics Collector

Hi,

I have changed my Ambari Metrics setup from embedded to distributed mode on 
my secure cluster.

After that I tried to start the Ambari Metrics Collector, but was unable to start 
the service. I have tried to debug the issue, but I couldn't resolve it.

I have verified all the keytab files; all are able to kinit. And I saw this 
log in "ambari-metrics-collector.log":

2015-11-11 08:50:18,432 WARN org.apache.hadoop.hbase.ipc.AbstractRpcClient: 
Couldn't setup connection for 
amshbase/mst03.corp@corp.com to 
amshbasemaster/mst03.corp@corp.com

Zookeeper:
2015-11-11 08:49:51,628 INFO  [ProcessThread(sid:0 cport:-1):] 
server.PrepRequestProcessor: Got user-level KeeperException when processing 
sessionid:0x150f5bc9262 type:create cxid:0x16 zxid:0xc8 txntype:-1 
reqpath:n/a Error Path:/ams-hbase-secure/tokenauth Error:KeeperErrorCode = 
NoNode for /ams-hbase-secure/tokenauth
2015-11-11 08:49:52,025 INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:61181] 
server.NIOServerCnxnFactory: Accepted socket connection from 
/10.192.149.194:58751
2015-11-11 08:49:52,030 WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:61181] 
server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, 
likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)

Could you please help me bring back the Ambari Metrics Collector service? 
Please find the attached AMS logs.

Thanks,
Shaik


Re: Unable to Start Ambari Metrics Collector

2015-11-11 Thread Dmitry Sen
Logs?


From: Shaik M <munna.had...@gmail.com>
Sent: Wednesday, November 11, 2015 4:27 PM
To: user@ambari.apache.org
Subject: Re: Unable to Start Ambari Metrics Collector

Hi,

I have created required keytabs for AMS.

ams.collector.keytab
ams-hbase.master.keytab
ams-hbase.regionserver.keytab
ams-zk.service.keytab


All keytabs are able to "kinit" without any issue. I am not able to trace the 
issue.

Please help us to trace this issue.

Thanks,
Shaik

On 11 November 2015 at 20:54, Dmitry Sen 
<d...@hortonworks.com<mailto:d...@hortonworks.com>> wrote:

Hi,


In the RS log:

org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth 
for /ams-hbase-secure/running


1. Do you have ams keytabs at /etc/security/keytabs ?

If not, create and put the keytabs in /etc/security/keytabs (see the principal 
names and keytab paths in the ams-hbase-security-site config). The easiest way is 
to disable security for the cluster and enable it again; the Security Wizard will 
create all the required keytabs for you.

2. Check with kinit whether you can get a ticket from the Kerberos server for 
amshbase/mst03.corp@corp.com<mailto:mst03.corp@corp.com> and 
amshbasemaster/mst03.corp@corp.com<mailto:mst03.corp@corp.com>





From: Shaik M <munna.had...@gmail.com<mailto:munna.had...@gmail.com>>
Sent: Wednesday, November 11, 2015 12:15 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Unable to Start Ambari Metrics Collector

Hi,

I have changed my Ambari Metrics setup from embedded to distributed mode on 
my secure cluster.

After that I tried to start the Ambari Metrics Collector, but was unable to start 
the service. I have tried to debug the issue, but I couldn't resolve it.

I have verified all the keytab files; all are able to kinit. And I saw this 
log in "ambari-metrics-collector.log":

2015-11-11 08:50:18,432 WARN org.apache.hadoop.hbase.ipc.AbstractRpcClient: 
Couldn't setup connection for 
amshbase/mst03.corp@corp.com<mailto:mst03.corp@corp.com> to 
amshbasemaster/mst03.corp@corp.com<mailto:mst03.corp@corp.com>

Zookeeper:
2015-11-11 08:49:51,628 INFO  [ProcessThread(sid:0 cport:-1):] 
server.PrepRequestProcessor: Got user-level KeeperException when processing 
sessionid:0x150f5bc9262 type:create cxid:0x16 zxid:0xc8 txntype:-1 
reqpath:n/a Error Path:/ams-hbase-secure/tokenauth Error:KeeperErrorCode = 
NoNode for /ams-hbase-secure/tokenauth
2015-11-11 08:49:52,025 INFO  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:61181<http://0.0.0.0/0.0.0.0:61181>] 
server.NIOServerCnxnFactory: Accepted socket connection from 
/10.192.149.194:58751<http://10.192.149.194:58751>
2015-11-11 08:49:52,030 WARN  
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:61181<http://0.0.0.0/0.0.0.0:61181>] 
server.NIOServerCnxn: caught end of stream exception
EndOfStreamException: Unable to read additional data from client sessionid 0x0, 
likely client has closed socket
at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228)
at 
org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208)
at java.lang.Thread.run(Thread.java:745)

Could you please help me bring back the Ambari Metrics Collector service? 
Please find the attached AMS logs.

Thanks,
Shaik



Re: Ambari commandScript running on ambari server itself

2015-11-11 Thread Dmitry Sen
Hi,


You can

1. Define a new master component for your service in metainfo.xml

2. Create for your stack a role_command_order.json like 
stacks/HDP/2.3/role_command_order.json , and add here the dependency (a full-file 
sketch follows this list)

"YOUR_OTHER_COMPONENTS-START" : ["YOUR_MASTER_COMPONENT-START"],

or

"YOUR_OTHER_COMPONENTS-INSTALL" : ["YOUR_MASTER_COMPONENT-START"],?

3. Implement the certificate generation logic in 
YOUR_SERVICE/1.0/scripts/master_component.py . The certificates should be 
located under /var/lib/ambari-server/resources/ ; it's a directory shared by the 
ambari server at :8080

4. Install ambari agent on the Ambari Server node
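
A hedged sketch of a minimal role_command_order.json for step 2 ("general_deps" 
is the section name used by the stock HDP files; the component names are the 
placeholders from above):

{
  "general_deps": {
    "_comment": "dependencies for all cases",
    "YOUR_OTHER_COMPONENTS-START": ["YOUR_MASTER_COMPONENT-START"]
  }
}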


BR,

Dmytro Sen


From: Henning Kropp 
Sent: Wednesday, November 11, 2015 11:21 PM
To: user@ambari.apache.org
Subject: Re: Ambari commandScript running on ambari server itself

Hi,

maybe 
/var/lib/ambari-server/resources/custom_action_definitions/system_action_definitions.xml
 and /var/lib/ambari-server/resources/custom_actions/scripts could work for 
your purposes?

Regards


 On Wed, 11 Nov 2015 19:27:04 +0100 Constantine Yarovoy  
wrote 

Hi!

I am developing my own custom app stack and a bunch of services for Ambari (not 
related to HDP).
And I have a specific requirement - run some code locally on the Ambari server 
right after the deployment process starts, but before any service components are 
installed on the chosen hosts.
This "local" code is supposed to do 2 operations:
1) generate some files (e.g. certificates/tokens)
2) that are to be copied/distributed to all master hosts.

Something similar can be found in Ansible and it's named "local-action".

Is there any recommended way to declare such a local command script that runs on 
the Ambari Server itself? I mean, of course, I have 2 options for the desired 
automation, but they all "suck", I think:

- I can code it in Ansible or bash and run it prior to cluster deployment
- I can code it inside the master commandScript's install function with 
Execute("generate.sh > generate.out && scp generate.out 
user@master-host-1:/home/user/").

But it'd be much better if Ambari had such functionality and I could have done 
all operations from just a single Ambari Web UI.

Any suggestions?



Re: metrics visible from host_component but not component

2015-11-10 Thread Dmitry Sen
Hi,

AMS doesn't support custom service metrics yet, but I can propose some points 
to check.

Try to call
curl -X GET -u admin:admin 
"http://dn01:6188/ws/v1/timeline/metrics?metricNames=gpfs.disk_used=gpfs;

If you don't get the metrics, then something needs to be fixed on the AMS side. 
Otherwise:

When you do
curl -X GET -u admin:admin 
"http://dn01:8080/api/v1/clusters/nate/services/GPFS/components/GPFS_MASTER?fields=metrics/gpfs/disk_used;

Ambari calls
curl -X GET -u admin:admin 
"http://dn01:6188/ws/v1/timeline/metrics?metricNames=gpfs.disk_used=gpfs_master;

But as I can see in your previous message, appId is "gpfs".
{"metrics":[{"timestamp":1447084964323,"metricname":"gpfs.disk_used","appid":"gpfs","hostname":"dn01-dat.ibm.com","starttime":1447084964,"metrics":{"1447084964":1437696.0}}]}

Your custom application should report appId as gpfs_master, not gpfs. 
Another option is to rename the GPFS_MASTER component to GPFS in metainfo.xml.

BR,
Dmytro Sen


From: Nathan Falk 
Sent: Tuesday, November 10, 2015 6:37 PM
To: user@ambari.apache.org
Subject: metrics visible from host_component but not component


I have a custom Ambari service, with a metrics.json and widgets.json defined.

The widgets display on the service dashboard summary page, but instead of the 
graph or data, I see "n/a".

When I use the REST API to query the ambari server, I see the metrics for the 
host_component, but not when I query the component.

In metrics.json, I've added some of the basic ams host metrics, plus some 
service-specific metrics. All metrics are defined in both "Component" and 
"HostComponent". As an example:

{
  "GPFS_MASTER": {
"Component": [
  {
"type": "ganglia",
"metrics": {
  "default": {
"metrics/cpu/cpu_idle":{
  "metric":"cpu_idle",
  "pointInTime":true,
  "temporal":true,
  "amsHostMetric":true
},
...
"metrics/gpfs/disk_used": {
  "metric": "gpfs.disk_used",
  "pointInTime": true,
  "temporal": true
},
...
  }
}
  }
],
"HostComponent": [
  {
"type": "ganglia",
"metrics": {
  "default": {
"metrics/cpu/cpu_idle":{
  "metric":"cpu_idle",
  "pointInTime":true,
  "temporal":true,
  "amsHostMetric":true
},
...
"metrics/gpfs/disk_used": {
  "metric": "gpfs.disk_used",
  "pointInTime": true,
  "temporal": true
},
...


I query the AMS Collector, and it seems that the metrics are there:
[root@dn01-dat nathan]# curl -X GET -u admin:admin 
"http://dn01:6188/ws/v1/timeline/metrics?metricNames=gpfs.disk_used=dn01-dat.ibm.com;
{"metrics":[{"timestamp":1447084964323,"metricname":"gpfs.disk_used","appid":"gpfs","hostname":"dn01-dat.ibm.com","starttime":1447084964,"metrics":{"1447084964":1437696.0}}]}

I query Ambari, and whether I see the metric or not depends on how I do the 
query. If I query the GPFS_MASTER service component, I do NOT see the metric:
[root@dn01-dat nathan]# curl -X GET -u admin:admin 
"http://dn01:8080/api/v1/clusters/nate/services/GPFS/components/GPFS_MASTER?fields=metrics/gpfs/disk_used;
{
  "href" : 
"http://dn01:8080/api/v1/clusters/nate/services/GPFS/components/GPFS_MASTER?fields=metrics/gpfs/disk_used;,
  "ServiceComponentInfo" : {
"cluster_name" : "nate",
"component_name" : "GPFS_MASTER",
"service_name" : "GPFS"
  }
}

If I query the GPFS_MASTER host component on dn01, then I do see the metric:
[root@dn01-dat nathan]# curl -X GET -u admin:admin 
"http://dn01:8080/api/v1/clusters/nate/hosts/dn01-dat.ibm.com/host_components/GPFS_MASTER?fields=metrics/gpfs/disk_used;
{
  "href" : 
"http://dn01:8080/api/v1/clusters/nate/hosts/dn01-dat.ibm.com/host_components/GPFS_MASTER?fields=metrics/gpfs/disk_used;,
  "HostRoles" : {
"cluster_name" : "nate",
"component_name" : "GPFS_MASTER",
"host_name" : "dn01-dat.ibm.com"
  },
  "host" : {
"href" : "http://dn01:8080/api/v1/clusters/nate/hosts/dn01-dat.ibm.com;
  },
  "metrics" : {
"gpfs" : {
  "disk_used" : 1437696.0
}
  }
}

By comparison, if I query the "cpu_idle" metric, also defined in the GPFS 
metrics.json file, I see the metric in both queries:
[root@dn01-dat nathan]# curl -X GET -u admin:admin 
"http://dn01:8080/api/v1/clusters/nate/services/GPFS/components/GPFS_MASTER?fields=metrics/cpu/cpu_idle;
{
  "href" : 
"http://dn01:8080/api/v1/clusters/nate/services/GPFS/components/GPFS_MASTER?fields=metrics/cpu/cpu_idle;,
  "ServiceComponentInfo" : {
"cluster_name" : "nate",
"component_name" : "GPFS_MASTER",
"service_name" : "GPFS"
  },
  "metrics" : {
"cpu" : {
  "cpu_idle" : 0.6248046875
}
  }
}[root@dn01-dat nathan]#
[root@dn01-dat nathan]# curl -X 

Re: Flume service on ambari 2.1.2 doesn't show any metrics

2015-10-29 Thread Dmitry Sen
Hi,


Agent-specific graphs aren't shown in 2.1.2; 
https://issues.apache.org/jira/browse/AMBARI-13336 is fixed for the 2.1.3 release.


There is no Ambari Metrics support for non-Hadoop services yet.


From: saloni udani 
Sent: Thursday, October 29, 2015 12:12 PM
To: user@ambari.apache.org
Subject: Flume service on ambari 2.1.2 doesn't show any metrics

Hi all,

I am running Ambari 2.1.2 with some Hadoop services, Ambari Metrics and 
Flume. The Flume service doesn't show any metrics although the data is processed 
and ingested. It always states "No data available. The Ambari Metrics service may 
be uninstalled or inaccessible". Also /usr/lib/ambari-metrics-flume-sink is 
missing. Does this have anything to do with 
https://issues.apache.org/jira/browse/AMBARI-13336 ?

If not, then what are the steps to configure Ambari Metrics to work for a 
non-Hadoop service?


Regards


Re: Stuck in installation after upgrade

2015-10-23 Thread Dmitry Sen
Hi,


If you haven't finished the deployment yet, the best decision is to start deploying 
the hadoop cluster from scratch.

Just execute

ambari-server stop

ambari-server reset

ambari-server start


Only Ambari DB will be cleaned up, nodes won't install rpm packages again.



From: Kaliyug Antagonist 
Sent: Friday, October 23, 2015 6:40 PM
To: user@ambari.apache.org
Subject: Stuck in installation after upgrade

I was trying to install HDP 2.3 using Ambari 2.1.0.

I got several errors like :


resource_management.core.exceptions.Fail: Execution of 'conf-select 
set-conf-dir --package pig --stack-version 2.3.0.0-2557 --conf-version 0' 
returned 1. pig not installed or incorrect package name


I found that the latest Ambari version has these issues fixed, hence, I stopped 
the Ambari server and agents and I upgraded to Ambari 2.1.2


Now when I log in to the web interface, under 'Clusters', I see my cluster with 
'Cluster creation in progress'


but when I click on 'Cluster creation in progress' I just get the 'Metrics', 
'Heatmaps' and 'Config History' tabs.


How do I proceed with the installation ? Do I have to clean the cluster again 
???


Re: Stuck in installation after upgrade

2015-10-23 Thread Dmitry Sen
Yes.


From: Kaliyug Antagonist <kaliyugantagon...@gmail.com>
Sent: Friday, October 23, 2015 7:12 PM
To: user@ambari.apache.org
Subject: Re: Stuck in installation after upgrade


I had reached the stage where the installation begins and, if there are errors 
or warnings, these are shown with the 'Next' button disabled. Then I simply 
logged out, shut down the ambari server and agents, upgraded and restarted Ambari. 
Now will the reset approach work?

On Oct 23, 2015 17:48, "Dmitry Sen" 
<d...@hortonworks.com<mailto:d...@hortonworks.com>> wrote:

Hi,


If you haven't finished the deployment yet, the best decision is to start deploying 
the hadoop cluster from scratch.

Just execute

ambari-server stop

ambari-server reset

ambari-server start


Only Ambari DB will be cleaned up, nodes won't install rpm packages again.



From: Kaliyug Antagonist 
<kaliyugantagon...@gmail.com<mailto:kaliyugantagon...@gmail.com>>
Sent: Friday, October 23, 2015 6:40 PM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Stuck in installation after upgrade

I was trying to install HDP 2.3 using Ambari 2.1.0.

I got several errors like :


resource_management.core.exceptions.Fail: Execution of 'conf-select 
set-conf-dir --package pig --stack-version 2.3.0.0-2557 --conf-version 0' 
returned 1. pig not installed or incorrect package name


I found that the latest Ambari version has these issues fixed, hence, I stopped 
the Ambari server and agents and I upgraded to Ambari 2.1.2


Now when I log in to the web interface, under 'Clusters', I see my cluster with 
'Cluster creation in progress<http://l1031lab:8080/#/>'


but when I click on 'Cluster creation in progress<http://l1031lab:8080/#/>' I 
just get the 'Metrics', 'Heatmaps' and 'Config History' tabs.


How do I proceed with the installation ? Do I have to clean the cluster again 
???


Re: Change the tmp directory

2015-10-22 Thread Dmitry Sen
Hi,


There are a few possible problems with the /tmp directory:

- it's cleaned up after reboot. All the services keep this in mind and never 
use any files in the tmp directory after a restart, so there is no need to keep 
these files after a server reboot

- /tmp might be mounted to a small/slow partition. That's the reason for 
thinking about updating the configuration.


1. /tmp is a good choice :) Your tmp-dont-del should be on a fast and 
preferably large partition

2. It's ok, but the service accounts must have RW access to the corresponding 
directories. If you set hive.exec.scratchdir=/mnt/fastdrive/tmp-dont-del/hive , 
you have to make sure that the hive user has RW permissions on 
/mnt/fastdrive/tmp-dont-del/hive
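
For example, a sketch of preparing such a directory ("hive:hadoop" assumes the 
default service account and group on the node):

mkdir -p /mnt/fastdrive/tmp-dont-del/hive
chown -R hive:hadoop /mnt/fastdrive/tmp-dont-del/hive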


BR,

Dmytro Sen



From: Kaliyug Antagonist 
Sent: Thursday, October 22, 2015 2:21 PM
To: user@ambari.apache.org
Subject: Change the tmp directory

I am planning to install HDP 2.3 using Ambari 2.1.0.

I have installed Ambari using the root privileges.

In the configurations, I found several places where the path is /tmp/*.

e.g:
hive.exec.scratchdir/tmp/hive
HBase tmp directory /tmp/hbase-${user.name}
dev.zookeeper.path  /tmp/dev-storm-zookeeper

I wish to replace the default /tmp path with some custom temp directory(say 
'tmp-dont-del')


  1.  Which is the recommended directory path for the new tmp ?
  2.  Is it fine to create this new temp. directory using root credentials ?


Re: Store NN and DN on different disks

2015-10-16 Thread Dmitry Sen
Hi,


There is an hdfs-site property, dfs.datanode.data.dir, shown as "DataNode 
directories" by Ambari, with the description:


"Determines where on the local filesystem an DFS data node should store its 
blocks.  If this is a comma-delimited list of directories, then data will be 
stored in all named directories, typically on different devices. Directories 
that do not exist are ignored."


So you need to set

/opt/dev/sdb,/opt/dev/sdc,/opt/dev/sdd,/opt/dev/sde,/opt/dev/sdf,/opt/dev/sdg,/opt/dev/sdh

as value for this property.
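
For reference, the same setting in raw hdfs-site.xml form (a sketch; with Ambari 
you would normally edit the "DataNode directories" field rather than the file):

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/opt/dev/sdb,/opt/dev/sdc,/opt/dev/sdd,/opt/dev/sde,/opt/dev/sdf,/opt/dev/sdg,/opt/dev/sdh</value>
</property>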


BR,

Dmytro Sen



From: Kaliyug Antagonist 
Sent: Friday, October 16, 2015 1:00 PM
To: user@ambari.apache.org
Subject: Store NN and DN on different disks

I have 9 nodes and I have started to install HDP 2.3 using Ambari 2.1.0.

*Objective : Use ONE disk for namenode, metadata etc. and rest of the disks for 
storing the HDFS data blocks

Node-1 : 7 disks(1 for root, opt etc., 6 empty)

df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg00-root
   16G  2.8G   13G  19% /
tmpfs  24G 0   24G   0% /dev/shm
/dev/sdb1 194M   58M  127M  32% /boot
/dev/mapper/vg00-home
   16G   11M   15G   1% /home
/dev/mapper/vg00-nsr   16G  2.4G   13G  16% /nsr
/dev/mapper/vg00-opt   16G  260M   15G   2% /opt
/dev/mapper/vg00-itm  434M  191M  222M  47% /opt/IBM/ITM
/dev/mapper/vg00-tmp   16G   70M   15G   1% /tmp
/dev/mapper/vg00-usr   16G  2.0G   13G  14% /usr
/dev/mapper/vg00-usr_local
  248M  231M  4.4M  99% /usr/local
/dev/mapper/vg00-var   16G  4.6G   11G  31% /var
/dev/mapper/vg00-tq   3.0G  974M  1.9G  34% /opt/teamquest
AFS   8.6G 0  8.6G   0% /afs
/dev/sdc1 551G  198M  522G   1% /opt/dev/sdc
/dev/sdd1 551G  198M  522G   1% /opt/dev/sdd
/dev/sde1 551G  198M  522G   1% /opt/dev/sde
/dev/sdf1 551G  198M  522G   1% /opt/dev/sdf
/dev/sdg1 551G  198M  522G   1% /opt/dev/sdg
/dev/sdh1 551G  198M  522G   1% /opt/dev/sdh

Node-2 to Node-9 : 8 disks(1 for root, opt etc., 7 empty)

df -h
FilesystemSize  Used Avail Use% Mounted on
/dev/mapper/vg00-root
   16G  405M   15G   3% /
tmpfs  24G 0   24G   0% /dev/shm
/dev/sda1 194M   58M  126M  32% /boot
/dev/mapper/vg00-home
   16G   11M   15G   1% /home
/dev/mapper/vg00-nsr   16G  2.4G   13G  17% /nsr
/dev/mapper/vg00-opt   16G   35M   15G   1% /opt
/dev/mapper/vg00-itm  434M  191M  221M  47% /opt/IBM/ITM
/dev/mapper/vg00-tmp   16G   70M   15G   1% /tmp
/dev/mapper/vg00-usr   16G  1.9G   14G  13% /usr
/dev/mapper/vg00-usr_local
  248M   11M  226M   5% /usr/local
/dev/mapper/vg00-var   16G  1.8G   14G  12% /var
/dev/mapper/vg00-tq   3.0G  946M  1.9G  33% /opt/teamquest
AFS   8.6G 0  8.6G   0% /afs
/dev/sdb1 551G  215M  522G   1% /opt/dev/sdb
/dev/sdc1 551G  328M  522G   1% /opt/dev/sdc
/dev/sdd1 551G  215M  522G   1% /opt/dev/sdd
/dev/sde1 551G  198M  522G   1% /opt/dev/sde
/dev/sdf1 551G  198M  522G   1% /opt/dev/sdf
/dev/sdg1 551G  327M  522G   1% /opt/dev/sdg
/dev/sdh1 551G  243M  522G   1% /opt/dev/sdh


In 'Assign Masters'

Node-1 Namenode, Zookeeper server
Node-2 SNN, RM, Zookeeper server, History Server
Node-3 WebHCat Server, HiveServer2, Hive Metastore, HBase Master, Oozie Server, 
Zookeeper server
Node-4 Kafka, Accumulo Master etc.
Node-5 Falcon, Knox etc.

In 'Assign Slaves and Clients', nothing selected for Node-1, rest 8 having 
uniform clients, NodeManager, Regionserver etc.

I AM STUCK IN the 'Customize Services', FOR EXAMPLE :

Under 'HDFS/Namenode directories', the defaults are :

/nsr/hadoop/hdfs/namenode
/opt/hadoop/hdfs/namenode
/opt/IBM/ITM/hadoop/hdfs/namenode
/tmp/hadoop/hdfs/namenode
/usr/hadoop/hdfs/namenode
/usr/local/hadoop/hdfs/namenode
/var/hadoop/hdfs/namenode
/opt/teamquest/hadoop/hdfs/namenode
/dev/isilon/hadoop/hdfs/namenode
/opt/dev/sdc/hadoop/hdfs/namenode
/opt/dev/sdd/hadoop/hdfs/namenode
/opt/dev/sde/hadoop/hdfs/namenode
/opt/dev/sdf/hadoop/hdfs/namenode
/opt/dev/sdg/hadoop/hdfs/namenode
/opt/dev/sdh/hadoop/hdfs/namenode

Under 'DataNode directories', the defaults are :

/nsr/hadoop/hdfs/data
/opt/hadoop/hdfs/data
/opt/IBM/ITM/hadoop/hdfs/data
/tmp/hadoop/hdfs/data
/usr/hadoop/hdfs/data
/usr/local/hadoop/hdfs/data
/var/hadoop/hdfs/data
/opt/teamquest/hadoop/hdfs/data
/dev/isilon/hadoop/hdfs/data
/opt/dev/sdb/hadoop/hdfs/data
/opt/dev/sdc/hadoop/hdfs/data
/opt/dev/sdd/hadoop/hdfs/data
/opt/dev/sde/hadoop/hdfs/data
/opt/dev/sdf/hadoop/hdfs/data
/opt/dev/sdg/hadoop/hdfs/data
/opt/dev/sdh/hadoop/hdfs/data

To achieve the *Objective I mentioned in the beginning, what values shall I put, 
and in which places?

I thought of replacing 'DataNode directories' defaults 

Re: Unable to set the namenode options using blueprints

2015-10-13 Thread Dmitry Sen
That's not an Ambari doc, but you are using ambari to deploy the cluster.


/etc/hadoop/conf/hadoop-env.sh is generated from the template, which is the 
"content" property in the hadoop-env ambari config, plus other properties listed in 
/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml





From: Stephen Boesch <java...@gmail.com>
Sent: Tuesday, October 13, 2015 12:39 PM
To: user@ambari.apache.org
Subject: Re: Unable to set the namenode options using blueprints

Hi Dmitry,
That does not appear to be correct.

From Hortonworks' own documentation for the latest 2.3.0:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/ref-80953924-1cbf-4655-9953-1e744290a6c3.1.html


If the cluster uses a Secondary NameNode, you should also set 
HADOOP_SECONDARYNAMENODE_OPTS to HADOOP_NAMENODE_OPTS in the hadoop-env.sh file:

HADOOP_SECONDARYNAMENODE_OPTS=$HADOOP_NAMENODE_OPTS

Another useful HADOOP_NAMENODE_OPTS setting is -XX:+HeapDumpOnOutOfMemoryError. 
This option specifies that a heap dump should be executed when an out of memory 
error occurs. You should also use -XX:HeapDumpPath to specify the location for 
the heap dump file. For example:







2015-10-13 2:29 GMT-07:00 Dmitry Sen 
<d...@hortonworks.com<mailto:d...@hortonworks.com>>:

hadoop-env has no property HADOOP_NAMENODE_OPTS; you should use 
namenode_opt_maxnewsize for specifying XX:MaxHeapSize

  "hadoop-env" : {
"properties" : {
   "namenode_opt_maxnewsize" :  "16384m"
}
  }


You may also want to check all available options in 
/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/configuration/hadoop-env.xml



From: Stephen Boesch <java...@gmail.com<mailto:java...@gmail.com>>
Sent: Tuesday, October 13, 2015 9:41 AM
To: user@ambari.apache.org<mailto:user@ambari.apache.org>
Subject: Unable to set the namenode options using blueprints

Given a blueprint that includes the following:

  "hadoop-env" : {
"properties" : {
   "HADOOP_NAMENODE_OPTS" :  " -XX:InitialHeapSize=16384m 
-XX:MaxHeapSize=16384m -Xmx16384m -XX:MaxPermSize=512m"
}
  }

The following occurs when creating the cluster:

Error occurred during initialization of VM
Too small initial heap

The logs say:

CommandLine flags: -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log 
-XX:InitialHeapSize=1024 -XX:MaxHeapSize=1024 -XX:MaxNewSize=200 
-XX:MaxTenuringThreshold=6 -XX:NewSize=200 -XX:OldPLABSize=16 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
 
-XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node"
 -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps 
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCompressedClassPointers 
-XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Notice that the options provided appear nowhere in the actual JVM launch
values.


It is no wonder the node is low on resources given the 1GB MaxHeapSize; that is
totally inadequate for a namenode.

btw this is HA - and both of the namenodes have same behavior.






Re: Removing hard-coded configuration keys for services

2015-09-30 Thread Dmitry Sen
Another option is using the configs.sh script:


# /var/lib/ambari-server/resources/scripts/configs.sh
Usage: configs.sh [-u userId] [-p password] [-port port] [-s] <ACTION> <AMBARI_HOST> <CLUSTER_NAME> <CONFIG_TYPE>
       [CONFIG_FILENAME | CONFIG_KEY [CONFIG_VALUE]]

   [-u userId]: Optional user ID to use for authentication. Default is 'admin'.
   [-p password]: Optional password to use for authentication. Default is 'admin'.
   [-port port]: Optional port number for Ambari server. Default is '8080'. Provide empty string to not use port.
   [-s]: Optional support of SSL. Default is 'false'. Provide empty string to not use SSL.
   <ACTION>: One of 'get', 'set', 'delete'. 'Set' adds/updates as necessary.
   <AMBARI_HOST>: Server external host name
   <CLUSTER_NAME>: Name given to cluster. Ex: 'c1'
   <CONFIG_TYPE>: One of the various configuration types in Ambari. Ex: global, core-site, hdfs-site, mapred-queue-acls, etc.
   [CONFIG_FILENAME]: File where entire configurations are saved to, or read from. Only applicable to 'get' and 'set' actions
   [CONFIG_KEY]: Key that has to be set or deleted. Not necessary for 'get' action.
   [CONFIG_VALUE]: Optional value to be set. Not necessary for 'get' or 'delete' actions.

Something like

configs.sh delete localhost myclustername config-name spark.yarn.applicationMaster.waitTries
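A hedged end-to-end sketch, assuming the key lives in the spark-defaults config
type and that spark.yarn.am.waitTime is the replacement key (use 'get' first to
confirm both assumptions):

# See which keys the type currently holds
/var/lib/ambari-server/resources/scripts/configs.sh get localhost myclustername spark-defaults
# Drop the deprecated key, then add its replacement
/var/lib/ambari-server/resources/scripts/configs.sh delete localhost myclustername spark-defaults spark.yarn.applicationMaster.waitTries
/var/lib/ambari-server/resources/scripts/configs.sh set localhost myclustername spark-defaults spark.yarn.am.waitTime 100s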


From: Olivier Renault 
Sent: Wednesday, September 30, 2015 6:10 PM
To: user@ambari.apache.org; user@ambari.apache.org
Subject: Re: Removing hard-coded configuration keys for services


You can remove a service using the API. You'll find the steps at the following 
url.

https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host
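The steps on that page boil down to two curl calls (a hedged sketch; host,
credentials, cluster and service names are placeholders):

# Stop the service first
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop Service"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/SERVICE_NAME
# Then delete it
curl -u admin:admin -H 'X-Requested-By: ambari' -X DELETE \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/SERVICE_NAME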

Thanks,
Olivier



On Wed, Sep 30, 2015 at 7:46 AM -0700, "Eirik Thorsnes" wrote:

Hi,

I've a cluster with Ambari 2.1.1 and HDP 2.3 stack. I've upgraded the
Spark service manually to version 1.5.x from 1.3.

I want to remove the (now deprecated) configuration key
"spark.yarn.applicationMaster.waitTries" and replace it with another key.

Since the key is missing the delete button in the ambari-interface, how
can I remove it? Do I have to do it manually in the db, or are there
better ways?

Regards,
Eirik

--
Eirik Thorsnes



Re: Couldn't change value of kafka.metrics.reporters

2015-09-23 Thread Dmitry Sen
It looks like a bug: whatever you set as the value for kafka.metrics.reporters,
it's overridden by the ambari agent in
/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1.2.2/package/scripts/kafka.py:57

kafka_server_config['kafka.metrics.reporters'] = params.kafka_metrics_reporters

As a hack, you can try to set your value in params.py on all hosts:
/var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py:103

  kafka_metrics_reporters = kafka_metrics_reporters + "org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter"
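If you go that route, a hedged sketch of pushing the hack to every broker host
(the Graphite reporter class name is an assumption taken from the
kafka-graphite project, and the agent cache can be re-synced from the server,
which would undo the edit):

for h in broker1 broker2 broker3; do
  ssh "$h" "sudo sed -i 's|org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter|com.criteo.kafka.KafkaGraphiteMetricsReporter|' /var/lib/ambari-agent/cache/common-services/KAFKA/0.8.1.2.2/package/scripts/params.py"
done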


From: Alejandro Fernandez 
Sent: Wednesday, September 23, 2015 6:32 PM
To: user@ambari.apache.org; Sriharsha Chintalapani; coool...@gmail.com
Subject: Re: Couldn't change value of kafka.metrics.reporters



From: Alejandro Fernandez
Date: Wednesday, September 23, 2015 at 8:32 AM
To: "user@ambari.apache.org", Sriharsha Chintalapani
Subject: Re: Couldn't change value of kafka.metrics.reporters

cc Sriharsha

From: Darwin Wirawan
Reply-To: "user@ambari.apache.org"
Date: Wednesday, September 23, 2015 at 1:20 AM
To: "user@ambari.apache.org"
Subject: Couldn't change value of kafka.metrics.reporters

Hello Ambari Team,

I'm using ambari 2.0, and now I'm trying to configure Kafka so it can send
metrics via kafka-graphite. I'm new to monitoring and metrics things; I tried
to figure out what metrics.reporter is.

And also, why can't I change kafka.metrics.reporters in my kafka configuration?
I changed it via ambari, but when I check kafka.properties the value is still
the default (org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter).

Is there any way I can change my kafka.metrics.reporters?

Another question, I found my kafka has some metrics, I guess my Ambari Metrics 
is running OK. Can I pass that metrics to graphite? Or somehow change the 
destination to my graphite server?

Thanks,
Darwin


Re: debian and/or rpm packages for Ambari 2.x

2015-09-03 Thread Dmitry Sen
The Ambari 2.0.2 deb packages are available, see

https://cwiki.apache.org/confluence/display/AMBARI/Install+Ambari+2.0.2+from+Public+Repositories

But they are tested on Ubuntu 12, not sure about Debian.


For CentOS and SUSE, RPM packages are available for the newer Ambari 2.1.1:

https://cwiki.apache.org/confluence/display/AMBARI/Install+Ambari+2.1.1+from+Public+Repositories


BR,

Dmytro Sen



From: Krishnanand Khambadkone 
Sent: Wednesday, September 02, 2015 9:02 PM
To: user@ambari.apache.org
Subject: debian and/or rpm packages for Ambari 2.x

Hi,  Does anyone know if there are downloadable debian/rpm packages available 
for Ambari 2.x



Re: Is it possible to point to an existing ZooKeeper cluster

2015-09-02 Thread Dmitry Sen

Ambari installer forces you to deploy ZooKeeper packages to at least 1 node,
but you can change other services' configs to point to another, already
configured ZooKeeper cluster.

Ambari doesn't support managing existing components that were not deployed by
Ambari, so you won't be able to monitor/start/stop/configure the existing
ZooKeeper cluster with Ambari.
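For the config part, the configs.sh script works (a hedged sketch; cluster
name, ensemble hosts and the exact config types to touch are placeholders,
hbase-site being one example of a type that carries a ZooKeeper quorum setting):

/var/lib/ambari-server/resources/scripts/configs.sh set localhost mycluster \
  hbase-site hbase.zookeeper.quorum "zk1.example.com,zk2.example.com,zk3.example.com"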





From: sigurd.knippenb...@gmail.com  on behalf of 
Sigurd Knippenberg 
Sent: Wednesday, September 02, 2015 7:10 PM
To: user@ambari.apache.org
Subject: Is it possible to point to an existing ZooKeeper cluster

We already have a ZooKeeper cluster configured in our company. If I install HDP 
using Ambari, is it possible to not install a second ZooKeeper cluster but just 
point to the existing ZooKeeper cluster instead?

Thanks,

Sigurd


Re: Configure NFS gateway in Ambari 2.1.1

2015-08-31 Thread Dmitry Sen
+ NFS Gateway support was added in HDP 2.3, so if your HDP version is 2.0-2.2,
it has to be upgraded as well.

From: Jeff Sposetti 
Sent: Monday, August 31, 2015 8:00 PM
To: user@ambari.apache.org
Subject: Re: Configure NFS gateway in Ambari 2.1.1

Once HDFS is installed (during initial install or via Add Service), to add
the NFS Gateway component to a host, go to the Host page and click "+Add" to
see it in the choice of components to add to that host.






On 8/31/15, 12:55 PM, "Eirik Thorsnes"  wrote:

>Hi all,
>
>After upgrading Ambari with manually setup nfs-gateway to Ambari 2.1.1
>(which, as I understand it, supports managing NFS-gw) I cannot find a
>way to specify which host shall have the NFS-gw setup - and other needed
>configuration so that Ambari controls the nfs-gw.
>
>During the Add-service wizard it appears that you should be able to
>select the nfs-gateway during initial install.
>
>Any pointers to e.g. API I need to use to enable it?
>
>Regards,
>Eirik
>
>--
>Eirik Thorsnes
>
>



Re: Issue with deploying Cluster through Ambari Agent behind proxy

2015-08-20 Thread Dmitry Sen
Hi,


There are 2 options

1. Set up apt to use the proxy on all agents (not sure if this works if your
local DNS doesn't resolve public hostnames):

echo 'Acquire::http::Proxy "http://proxy_host:proxy_port/";' > /etc/apt/apt.conf

2. Create a local copy of repositories, required for installing Hadoop Stack

http://docs.hortonworks.com/HDPDocuments/Ambari-2.1.0.0/bk_Installing_HDP_AMB/content/_using_a_local_repository.html
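Whichever option you choose, a quick hedged check that apt on an agent actually
sees the proxy from option 1:

apt-config dump | grep -i proxy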




From: Konrad Hübner konrad.hueb...@gmail.com
Sent: Thursday, August 20, 2015 10:20 AM
To: user@ambari.apache.org
Subject: Fwd: Issue with deploying Cluster through Ambari Agent behind proxy

Hi,

I am trying to deploy an HDP 2.3 cluster with Ambari 2.1. I am forced to work
behind a proxy server. The proxy is configured both system-wide in /etc/env,
including export of environment variables for sudoers, and in
/etc/apt/apt.conf.d with the Acquire::proxy directive. Ambari Server was
configured to run under a non-root user with sudo privilege. Installing the
ambari client worked, but when the agent starts the deployment of the cluster,
it runs apt-get install, and that fails when resolving the hostname. It looks
like it is not using the proxy. Please find the log output below. Any help is
appreciated!

Thanks
-Konrad


2015-08-20 06:53:13,299 - Package['unzip'] {}
2015-08-20 06:53:13,310 - Installing package unzip ('/usr/bin/apt-get -q -o 
Dpkg::Options::=--force-confdef --allow-unauthenticated --assume-yes install 
unzip')
2015-08-20 06:53:13,514 - Execution of '['/usr/bin/apt-get', '-q', '-o', 
'Dpkg::Options::=--force-confdef', '--allow-unauthenticated', '--assume-yes', 
'install', 'unzip']' returned 100. Reading package lists...
Building dependency tree...
Reading state information...
Suggested packages:
  zip
The following NEW packages will be installed:
  unzip
0 upgraded, 1 newly installed, 0 to remove and 7 not upgraded.
Need to get 157 kB of archives.
After this operation, 391 kB of additional disk space will be used.
Err http://archive.ubuntu.com/ubuntu/ trusty-updates/main unzip amd64 
6.0-9ubuntu1.3
  Could not resolve 'archive.ubuntu.com'
Err http://security.ubuntu.com/ubuntu/ trusty-security/main unzip amd64 
6.0-9ubuntu1.3
  Could not resolve 'security.ubuntu.com'
E: Failed to fetch 
http://security.ubuntu.com/ubuntu/pool/main/u/unzip/unzip_6.0-9ubuntu1.3_amd64.deb
  Could not resolve 'security.ubuntu.com'

E: Unable to fetch some archives, maybe run apt-get update or try with 
--fix-missing?
2015-08-20 06:53:13,514 - Failed to install package unzip. Executing 
`/usr/bin/apt-get update -qq`
2015-08-20 06:53:14,780 - Retrying to install package unzip




Re: How to get process IDs via ambari REST API

2015-07-29 Thread Dmitry Sen
Hi,


Ambari doesn't store the components' process IDs, so you can't get them via the
REST API.
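A common workaround is to read the pid files the daemons write locally on each
host (a hedged sketch; exact paths vary by service, run-as user and HDP version):

cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid
cat /var/run/hbase/hbase-hbase-master.pid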



From: Ehsan Haq ehsan@klarna.com
Sent: Wednesday, July 29, 2015 10:36 AM
To: user@ambari.apache.org
Subject: How to get process IDs via ambari REST API

Hi,
   I want to know if it is possible to get the process IDs of the processes run
by ambari (namenode, datanode, hbase master, region server, etc.) via the
ambari REST API?

--

Muhammad Ehsan ul Haque
Software Developer
Odin/Data Infra
+46 700 012 731

Klarna AB
Sveavägen 46, 111 34 Stockholm
Tel: +46 8 120 120 00
Reg no: 556737-0431
klarna.com



Re: Compiling ambari-metrics rpms in git-trunk partially fails

2015-06-12 Thread Dmitry Sen
Hi Eirik,

Ambari Metrics is built by an assembly plugin.

$ cd ambari-metrics
$ mvn package -Dbuild-rpm -DskipTests

You will find the RPMs in ambari-metrics-assembly/target/rpm

BR,
Dmytro Sen



From: Eirik Thorsnes eirik.thors...@uni.no
Sent: Friday, June 12, 2015 4:30 PM
To: user@ambari.apache.org
Subject: Compiling ambari-metrics rpms in git-trunk partially fails

Hi,

I'm trying to compile ambari rpms from git trunk and even when going
down to the ambari-metrics subdir and re-running mvn there, I only get a
proper rpm for the ambari-metrics-collector (in subdir
ambari-metrics-timelineservice). Some other rpms are created but they
don't contain any files.

Is this a known issue?
I see that Hortonworks has proper ambari-metrics rpms for ambari-2.1.0
in their beta/unreleased part (HDP-LABS) in their repo.

If I compile to tar.gz (removing rpm:rpm) I get what looks to be
correct tar.gz for the missing rpms in the ambari-metrics-assembly
subdir (including the correctly compiled ambari-metrics-collector
mentioned above). So it seems that the following rpms are not created
properly: ambari-metrics-hadoop-sink-$(version).rpm and
ambari-metrics-monitor-$(version).rpm.

I compile like this:

(java-home set to openjdk 1.7, mvn 3.0.5 in path,
setuptools-0.6c11-py2.6.egg installed, NodeJS v0.10.31 in path,
brunch@1.7.17)

export _JAVA_OPTIONS="-Xmx3072m -XX:MaxPermSize=512m -Djava.awt.headless=true"

mvn versions:set -DnewVersion=2.1.0.0

cd ambari-metrics
mvn versions:set -DnewVersion=2.1.0.0

cd ..
mvn -B install package rpm:rpm -Dviews -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl

cd ambari-metrics
mvn -B install package rpm:rpm -Dviews -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl

cd ../contrib/ambari-log4j/
mvn -B install package rpm:rpm -Dviews -DskipTests -Dpython.ver="python >= 2.6" -Preplaceurl

Maven reports that builds are all success.

Best,
Eirik

--
Eirik Thorsnes


Re: Re:Re: Re:Re: Re:Re: Re:How to get datanode numbers in stack_advisor.py#recommendHDFSConfigurations?

2015-04-21 Thread Dmitry Sen
Hi,


Thanks for discovering this bug, it has been fixed in scope of AMBARI-10602


BR,

Dmytro


From: Qi Yun Liu amari_liuqi...@163.com
Sent: Tuesday, April 21, 2015 5:24 AM
To: user@ambari.apache.org
Subject: Re:Re: Re:Re: Re:Re: Re:How to get datanode numbers in 
stack_advisor.py#recommendHDFSConfigurations?

Hi Siddharth,

Per your comments, I created a review request for jira AMBARI-10515: 
https://reviews.apache.org/r/33386/.

I am sure I used the correct method #getHostsWithComponent(), not
#getHostWithComponent(). In my test I found that the method
stack_advisor.py#getHostsWithComponent now only returns one host, and then I
found its code is:
  if (len(components) > 0 and
      len(components[0]["StackServiceComponents"]["hostnames"]) > 0):
    # component available - determine hosts and memory
    componentHostname = components[0]["StackServiceComponents"]["hostnames"][0]
    componentHosts = [host for host in hosts["items"] if host["Hosts"]["host_name"] == componentHostname]
    return componentHosts

We can see that it returns only one host because of the line
'componentHostname = components[0]["StackServiceComponents"]["hostnames"][0]'.
So, to get all the hosts, I modified the code to add a for-loop as below:

  if (len(components) > 0 and
      len(components[0]["StackServiceComponents"]["hostnames"]) > 0):
    componentHosts = []
    for index in range(len(components[0]["StackServiceComponents"]["hostnames"])):
      # component available - determine hosts and memory
      componentHostname = components[0]["StackServiceComponents"]["hostnames"][index]
      componentHosts.append([host for host in hosts["items"] if host["Hosts"]["host_name"] == componentHostname][0])
    return componentHosts

And now it returns all hosts as expected.

Thanks!

At 2015-04-18 02:28:59, Srimanth Gunturi sgunt...@hortonworks.com wrote:

Hi Qi Yun Liu,

On the trunk source #getHostsWithComponent() returns all hosts containing the 
component.


https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L211


There is also #getHostWithComponent() which returns the first of those hosts.

https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L222


I am hoping the right method is being used.

Also, #recommendHDFSConfigurations() has access to all the component-layout (in 
services variable), and all details about the host (in hosts variable), so you 
can do additional improvements.

https://github.com/apache/ambari/blob/trunk/ambari-server/src/main/resources/stacks/HDP/2.0.6/services/stack_advisor.py#L141


Hope that helps.

Regards,

Srimanth




From: Qi Yun Liu <amari_liuqi...@163.com>
Sent: Friday, April 17, 2015 4:38 AM
To: user@ambari.apache.org
Subject: Re:Re: Re:Re: Re:How to get datanode numbers in 
stack_advisor.py#recommendHDFSConfigurations?

Hi Siddharth,

Yes, I agree that it made sense to get only 1 host when we were not supporting
heterogeneous environments. At present we might support heterogeneous
environments, so it also makes sense to update this method accordingly. In
addition, I really need this method to return all hosts, and it's possible that
others have similar requirements. Therefore, I created a patch for jira
AMBARI-10515 and did some tests to ensure it introduces no regressions. Could
you please help review it?

Thanks a lot!





At 2015-04-17 00:34:51, Siddharth Wagle <swa...@hortonworks.com> wrote:

By we, I meant the stack advisor feature only.


-Sid



From: Siddharth Wagle
Sent: Thursday, April 16, 2015 9:34 AM
To: user@ambari.apache.org
Subject: Re: Re:Re: Re:How to get datanode numbers in 
stack_advisor.py#recommendHDFSConfigurations?


Hi Qi Yun Li,


I believe the 1 host is intended behavior since a the moment we are not 
supporting heterogeneous environments that is what any 1 of candidate hosts is 
chose to represent what cpu / memory / disk characteristics to use for 
recommending configurations for a component.


Srimanth, can attest to this.


BR,

Sid



From: Qi Yun Liu <amari_liuqi...@163.com>
Sent: Wednesday, April 15, 2015 11:34 PM
To: user@ambari.apache.org
Subject: Re:Re: Re:How to get datanode numbers in 
stack_advisor.py#recommendHDFSConfigurations?

Hi Siddharth,

Thanks a lot for your comments! According to your suggestions, I did a test:
1. Launch Ambari server GUI and start a brand new cluster installation
2. In 'Assign Slaves and Clients' page, select two hosts(hostname0.com, 
hostname1.com) as the datanodes
3. After clicking Next button, I found in the 

Re: Permanent changes to Ganglia config (how to)

2015-03-05 Thread Dmitry Sen
Hi Fabio,


You can change the default RRD storage settings in

common-services/GANGLIA/3.5.0/package/files/gmetadLib.sh line 120, before the
ganglia server start.

If you want to change the sampling rate to 5 seconds, you would have

RRAs RRA:AVERAGE:0.5:1:732 RRA:AVERAGE:0.5:24:244 RRA:AVERAGE:0.5:168:244 
RRA:AVERAGE:0.5:672:244 \
 RRA:AVERAGE:0.5:5760:374


instead of


RRAs RRA:AVERAGE:0.5:1:244 RRA:AVERAGE:0.5:24:244 RRA:AVERAGE:0.5:168:244 
RRA:AVERAGE:0.5:672:244 \
 RRA:AVERAGE:0.5:5760:374


RRA:AVERAGE:0.5:1:732 means 732 samples per 1 hour
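A hedged one-liner for the edit itself (the pattern and path are assumptions;
check line 120 of gmetadLib.sh first, then restart the Ganglia server):

sed -i 's|RRA:AVERAGE:0.5:1:244|RRA:AVERAGE:0.5:1:732|' \
  /var/lib/ambari-agent/cache/common-services/GANGLIA/3.5.0/package/files/gmetadLib.sh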




BR,


Dmytro Sen






From: Fabio C. anyte...@gmail.com
Sent: Monday, March 02, 2015 8:41 AM
To: user@ambari.apache.org
Subject: Permanent changes to Ganglia config (how to)

Hi everyone,

I was trying to change the Ganglia sampling rate (let's say from 15 to 5 
seconds). I made some changes on the ambari-server node in the file 
/etc/ganglia/hdp/gmetad.conf but, when I restart the Ganglia service in Ambari, 
it is reverted to the original one.
Then I modified the file /usr/libexec/hdp/ganglia/gmetadLib.sh, which looks 
like the generator of the gmetad.conf, but even this file is then reverted to 
the original one.

How can I make a permanent change to gmetad.conf? Will these changes be
broadcast to all the other nodes?

Thanks a lot

Fabio


Re: Java version upgrade

2015-02-16 Thread Dmitry Sen
Hi Giovanni,


Ambari supports customized JDK :


# ambari-server setup

Using python  /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
SELinux status is 'disabled'
Ambari-server daemon is configured to run under user 'root'. Change this 
setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking iptables...
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.7
[2] Oracle JDK 1.6
[3] - Custom JDK
==
Enter choice (1): 3
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all 
hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If 
you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction 
Policy Files are valid on all hosts.
Path to JAVA_HOME: /my/path/to/jdk

...
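For scripted installs, the same choice can be made non-interactively (a hedged
sketch; -s runs setup silently, -j sets the custom JAVA_HOME, and the path is a
placeholder):

# ambari-server setup -s -j /my/path/to/jdk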



Thanks,


BR,

Dmytro Sen



From: Giovanni Paolo Gibilisco gibb...@gmail.com
Sent: Monday, February 16, 2015 11:22 AM
To: user@ambari.apache.org
Subject: Java version upgrade

Hi,
I'm trying to upgrade the version of Java used in the cluster in order to
support Java 8. I've managed to install Java 8 and set JAVA_HOME correctly on
all the nodes in the cluster. I've restarted the services using ambari and even
restarted the ambari server and agents, but still, when I submit a job using
yarn, I get an exception in my code saying

Exception in thread "main" java.lang.UnsupportedClassVersionError:
it/polimi/tssotn/dataloader/DataLoader : Unsupported major.minor version 52.0

which basically means it is not running with Java 8.
Is there a way to tell ambari to configure yarn (and all other services) to use
the new JRE?


Re: update nagios and ganglia

2015-01-12 Thread Dmitry Sen
Hi,

Ganglia and nagios can be upgraded as a part of general Ambari upgrade
process by executing

yum upgrade hdp_mon_nagios_addons hdp_mon_ganglia_addons

More datails you can find at
https://ambari.apache.org/1.2.3/installing-hadoop-using-ambari/content/ambari-chap7.html


P.S. Actual package names depend on your Ambari version



On Sat, Jan 10, 2015 at 4:08 AM, johny casanova pcgamer2...@outlook.com
wrote:

 hello,

 are there instructions on how to update nagios and ganglia on ambari ?




-- 
BR,
Dmitry Sen



Re: Stop all components API call no longer seems to work

2014-11-07 Thread Dmitry Sen
Hi, Greg!

I think you should specify the operation level in the body, like:

{"RequestInfo":{"context":"Stop Services","operation_level":{"level":"CLUSTER","cluster_name":"testcluster"}},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}
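Applied to the original host_components call, that would look like the
following (a hedged sketch; host and credentials are placeholders):

curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop All Components","operation_level":{"level":"CLUSTER","cluster_name":"testcluster"}},"Body":{"HostRoles":{"state":"INSTALLED"}}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/testcluster/hosts/c6404.ambari.apache.org/host_components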


BR,
Dmytro

On Fri, Nov 7, 2014 at 3:55 PM, Greg Hill greg.h...@rackspace.com wrote:

  This used to work in earlier 1.7.0 builds, but doesn't seem to any
 longer:

  PUT /api/v1/clusters/testcluster/hosts/c6404.ambari.apache.org/host_components

  {"RequestInfo": {"context": "Stop All Components"}, "Body": {"HostRoles": {"state": "INSTALLED"}}}

  Seeing this in the server logs:
 13:05:42,082  WARN [qtp1842914725-24] AmbariManagementControllerImpl:2149
 - Can not determine request operation level. Operation level property
 should be specified for this request.
  13:05:42,082  INFO [qtp1842914725-24]
 AmbariManagementControllerImpl:2162 - Received a updateHostComponent
 request, clusterName=testcluster, serviceName=HDFS, componentName=DATANODE,
 hostname=c6404.ambari.apache.org, request={ clusterName=testcluster,
 serviceName=HDFS, componentName=DATANODE, hostname=c6404.ambari.apache.org,
 desiredState=INSTALLED, desiredStackId=null, staleConfig=null,
 adminState=null}
 13:05:42,083  INFO [qtp1842914725-24] AmbariManagementControllerImpl:2162
 - Received a updateHostComponent request, clusterName=testcluster,
 serviceName=GANGLIA, componentName=GANGLIA_MONITOR, hostname=
 c6404.ambari.apache.org, request={ clusterName=testcluster,
 serviceName=GANGLIA, componentName=GANGLIA_MONITOR, hostname=
 c6404.ambari.apache.org, desiredState=INSTALLED, desiredStackId=null,
 staleConfig=null, adminState=null}
 13:05:42,083  INFO [qtp1842914725-24] AmbariManagementControllerImpl:2162
 - Received a updateHostComponent request, clusterName=testcluster,
 serviceName=YARN, componentName=NODEMANAGER, hostname=
 c6404.ambari.apache.org, request={ clusterName=testcluster,
 serviceName=YARN, componentName=NODEMANAGER, hostname=
 c6404.ambari.apache.org, desiredState=INSTALLED, desiredStackId=null,
 staleConfig=null, adminState=null}

  But I get an empty response with status 200 and no request was created.
 Shouldn't that be an error if it can't act on my request?

  Are there some docs about how to formulate the 'operation level' part of
 the request?

  Greg




-- 
BR,
Dmitry Sen



Re: Plugins Support to extend Apache Ambari Functionality

2014-07-07 Thread Dmitry Sen
Hi,

Apache Ambari supports custom plug-in UI capabilities. I think that's what
you're looking for: https://cwiki.apache.org/confluence/display/AMBARI/Views




On Mon, Jul 7, 2014 at 12:08 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) deepuj...@gmail.com wrote:

 +1
 Kafka cluster or druid cluster or custom java processes


 On Mon, Jul 7, 2014 at 12:46 PM, Suraj Nayak M snay...@gmail.com wrote:

  Hi Everyone,

 Does Apache Ambari support custom plugin development, by which one can
 extend Ambari functionality? Are there any custom plugins already out
 there?

 Custom functionality might be following :

- Install, Monitor, Start, Stop and Restart of Custom Java processes
(Example : Kafka OR In-House Java Frameworks or tools which run on
distributed mode)
- Creating custom UI to monitor the above added Custom Java Processes.

 --
 Thanks & Regards
 Suraj Nayak M





 --
 Deepak




-- 
BR,
Dmitry Sen



Re: Is ambari agent(v1.6.0) able to install component in parallel?

2014-07-03 Thread Dmitry Sen
Hi,

Ambari agent uses the default package manager to install packages for
components (yum for RedHat/CentOS and zypper for SLES). yum and zypper can't be
run in parallel on a single node; they create locks once they start. But you
will have hadoop components installing in parallel on different nodes.

BR,
Dmytro Sen


On Thu, Jul 3, 2014 at 11:50 AM, sagi zhpeng...@gmail.com wrote:

 Hi ambari users,

 I am trying to deploy a hadoop cluster with ambari, and I want it to be as
 fast as possible.

 When I dug into the ambari agent source code, I found a parameter in
 ActionQueue.py named MAX_CONCURRENT_ACTIONS = 5, but I did not find any
 place that invokes it.

 So I wonder, is the ambari agent able to install components in parallel?
 If not, is there any other method to accelerate the whole installation
 process?

 Thanks in advance for any suggestions.

 -
 Best Regards




-- 
BR,
Dmitry Sen
