Re: install HDP3.0 using ambari

2018-07-24 Thread Benoit Perroud
HDP 3 doesn't ship Spark (1.x) any more, only Spark 2.

In general, old blueprints are not fully compatible and have to be tweaked a 
bit.

I see two options from where you are:

1) Upgrade your current blueprint, i.e. use it with HDP 2.6+, run the upgrade 
wizard from Ambari 2.7 to HDP 3, and export a new version of the blueprint.
2) Manually update the blueprint and remove the spark-defaults section it contains (a hedged 
sketch follows below). This still does not guarantee the blueprint will work; you might 
need further customisation.
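
A minimal sketch of option 2, assuming the exported blueprint sits in blueprint.json and that 
jq is available on the host (the file names, and dropping spark-env as well, are assumptions, 
not something from this thread):

  # Drop the spark-defaults (and, if present, spark-env) config blocks from an
  # exported blueprint before re-registering it against HDP 3 / Ambari 2.7.
  jq '
    def drop_spark: map(select(has("spark-defaults") or has("spark-env") | not));
    .configurations = ((.configurations // []) | drop_spark)
    | .host_groups = [ (.host_groups // [])[]
                       | .configurations = ((.configurations // []) | drop_spark) ]
  ' blueprint.json > blueprint-hdp3.json

Then register blueprint-hdp3.json with the usual POST to /api/v1/blueprints/NAME and retry the 
cluster creation.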

Benoit




> On 25 Jul 2018, at 00:05, Lian Jiang  wrote:
> 
> Thanks Benoit for the advice.
> 
> I switched to Ambari 2.7. However, when I created the cluster, it failed due 
> to "config types are not defined in the stack: [spark-defaults]".
> 
> The links below point to a blueprint spec older than Ambari 2.7:
> https://cwiki.apache.org/confluence/display/AMBARI/Blueprints#Blueprints-BlueprintStructure
> https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.0.0/administering-ambari/content/amb_using_ambari_blueprints.html
> 
> https://github.com/apache/ambari/tree/release-2.7.0/ambari-server/src/main/resources/stacks/HDP
> does not have HDP 3.0. This makes it hard to troubleshoot.
> 
> Do you know where I can find the source code of the HDP 3.0 Ambari stack, so that I 
> can check which configs are supported in the new Ambari?
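
As a hedged aside: the running Ambari 2.7 server can itself report which config types a stack 
service accepts, via the stacks REST endpoint (host, credentials and the SPARK2 service name 
below are placeholders/assumptions):

  # List the config types Ambari registers for a stack service, e.g. Spark 2 on HDP 3.0.
  curl -s -u admin:admin \
    "http://AMBARI_HOST:8080/api/v1/stacks/HDP/versions/3.0/services/SPARK2/configurations?fields=StackConfigurations/type" \
    | grep '"type"' | sort -u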
> 
> Thanks.
> 
> 
> 
> On Mon, Jul 23, 2018 at 2:35 PM, Benoit Perroud wrote:
> Are you using Ambari 2.7?
> 
> Make sure you upgrade Ambari to 2.7 first, since this version is required for 
> HDP 3
> 
> Benoit
> 
> 
>> On 23 Jul 2018, at 23:32, Lian Jiang wrote:
>> 
>> Hi,
>> 
>> I am using ambari blueprint to install HDP 3.0 and cannot register the vdf 
>> file.
>> 
>> The vdf file is (the url works):
>> 
>> {
>>   "VersionDefinition": {
>>     "version_url": "http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml"
>>   }
>> }
>> 
>> The error is "An internal system exception occurred: Stack data, Stack HDP 
>> 3.0 is not found in Ambari metainfo"
>> 
>> Any idea? Thanks.
> 
> 





Re: install HDP3.0 using ambari

2018-07-23 Thread Benoit Perroud
Are you using Ambari 2.7?

Make sure you upgrade Ambari to 2.7 first, since this version is required for 
HDP 3

Benoit


> On 23 Jul 2018, at 23:32, Lian Jiang  wrote:
> 
> Hi,
> 
> I am using ambari blueprint to install HDP 3.0 and cannot register the vdf 
> file.
> 
> The vdf file is (the url works):
> 
> {
>   "VersionDefinition": {
>     "version_url": "http://public-repo-1.hortonworks.com/HDP/centos7/3.x/updates/3.0.0.0/HDP-3.0.0.0-1634.xml"
>   }
> }
> 
> The error is "An internal system exception occurred: Stack data, Stack HDP 
> 3.0 is not found in Ambari metainfo"
> 
> Any idea? Thanks.





Re: Hive Error on restart when public IP is changed

2018-02-09 Thread Benoit Perroud
It can be configured with the hostname if your internal DNS resolution works.
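
For example, a hedged sketch using the configs.sh helper that ships with Ambari (exact path 
and argument order can vary between Ambari versions; the cluster name is a placeholder):

  # Point the proxyuser setting at the internal FQDN instead of the public DNS
  # name, so it survives public-IP changes; then restart HDFS/Hive from Ambari.
  INTERNAL_FQDN=$(hostname -f)
  /var/lib/ambari-server/resources/scripts/configs.sh -u admin -p admin \
    set localhost MYCLUSTER core-site "hadoop.proxyuser.hive.hosts" "$INTERNAL_FQDN"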




> On 09 Feb 2018, at 12:37, Satyanarayana Jampa  wrote:
> 
> Yes, changing the public DNS to the local hostname/IP works. I would like to know 
> if this can be set to the local hostname (FQDN) during installation itself, 
> so that it need not be changed manually on every restart of the AWS server or 
> whenever the public IP changes.
> 
> Thanks,
> Satya.
> From: Benoit Perroud [mailto:ben...@noisette.ch]
> Sent: 09 February 2018 15:32
> To: user@ambari.apache.org
> Subject: Re: Hive Error on restart when public IP is changed
> 
> The ip listed in the exception is the instance private ip.
> 
> I would change
> 
> > hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
> 
> to
> 
> > hadoop.proxyuser.hive.hosts: 172.31.55.219
> 
> If this still doesn’t work, remove the IP and put * instead.
> 
> Small warning here, I would not open Hive to the whole world and rely only on 
> host filtering thinking it’s secure.
> 
> 
> 
> 
> 
> 
> On 09 Feb 2018, at 09:59, Satyanarayana Jampa wrote:
> 
> Hi,
> 
> The below error is observed after restarting the single node AWS machine.
> 
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unauthorized connection for super-user: hive from IP 172.31.55.219
> at org.apache.hadoop.ipc.Client.call(Client.java:1427)
> at org.apache.hadoop.ipc.Client.call(Client.java:1358)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> 
> Scenario:
> 1. Install HDP on a single-node AWS box.
> 2. After installation, the following is set under Services -> HDFS -> Configs -> Advanced -> Custom core-site:
>    i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
>    ii. hadoop.proxyuser.hive.groups: *
> 3. Once we restart the AWS machine, its public IP changes, so the public DNS name that was 
> picked up automatically during installation for the "hadoop.proxyuser.hive.hosts" property 
> becomes invalid, hence the error.
> 
> Can someone please let me know how to overcome this situation?
> 
> Thanks,
> Satya.





Re: Hive Error on restart when public IP is changed

2018-02-09 Thread Benoit Perroud
The ip listed in the exception is the instance private ip.

I would change

> hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com 
> 

to

> hadoop.proxyuser.hive.hosts: 172.31.55.219

If this still doesn’t work, remove the IP and put * instead.

Small warning here, I would not open Hive to the whole world and rely only on 
host filtering thinking it’s secure.
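
A quick, hedged way to double-check what the cluster actually resolves for these settings after 
the change (hdfs getconf reads the local client config, so run it on the NameNode host):

  hdfs getconf -confKey hadoop.proxyuser.hive.hosts
  hdfs getconf -confKey hadoop.proxyuser.hive.groups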






> On 09 Feb 2018, at 09:59, Satyanarayana Jampa  wrote:
> 
> Hi,
> 
> The below error is observed after restarting the single node AWS machine.
> 
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
>  Unauthorized connection for super-user: hive from IP 172.31.55.219
> at org.apache.hadoop.ipc.Client.call(Client.java:1427)
> at org.apache.hadoop.ipc.Client.call(Client.java:1358)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
> at com.sun.proxy.$Proxy15.getFileInfo(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771)
> 
> Scenario:
> 1. Install HDP on a single-node AWS box.
> 2. After installation, the following is set under Services -> HDFS -> Configs -> Advanced -> Custom core-site:
>    i.  hadoop.proxyuser.hive.hosts: ec2-54-197-36-23.compute-1.amazonaws.com
>    ii. hadoop.proxyuser.hive.groups: *
> 3. Once we restart the AWS machine, its public IP changes, so the public DNS name that was 
> picked up automatically during installation for the "hadoop.proxyuser.hive.hosts" property 
> becomes invalid, hence the error.
> 
> Can someone please let me know how to overcome this situation?
> 
> Thanks,
> Satya.





Re: 2.2.1.1 and Kafka widget

2016-04-12 Thread Benoit Perroud
Cool, thanks!

Benoit

Sent from my iPhone

> On 13 Apr 2016, at 07:30, Dmitry Sen  wrote:
> 
> They will re-appear after upgrade to Ambari 2.2.2 and you'll be able to add 
> any customized widgets. I'm not sure if it can be fixed without upgrade.
> 
> From: Perroud Benoit 
> Sent: Tuesday, April 12, 2016 11:15 PM
> To: user@ambari.apache.org
> Subject: Re: 2.2.1.1 and Kafka widget
>  
> Thanks for the quick answer. I’m wondering whether, with the fix, the widgets will 
> re-appear, or whether it is easy to re-add them.
> 
> Thanks
> 
> Benoit
> 
> 
> 
>> On Apr 12, 2016, at 4:16 PM, Dmitry Sen  wrote:
>> 
>> Hi Benoit,
>> 
>> That's a known bug, it's solved in 
>> https://issues.apache.org/jira/browse/AMBARI-15759 for Ambari 2.2.2
>> ​
>> 
>> From: Benoit Perroud 
>> Sent: Tuesday, April 12, 2016 4:46 PM
>> To: user@ambari.apache.org
>> Subject: 2.2.1.1 and Kafka widget
>>  
>> Hi All,
>> 
>> 2.2.1.1 release notes mention 
>> https://issues.apache.org/jira/browse/AMBARI-14941, which is supposed to 
>> enhance the Kafka and Storm widgets.
>> 
>> I did deploy 2.2.1.1 and the Kafka widgets disappeared, but the Create 
>> Widget button is not working.
>> 
>> In the dev console, I got the following error:
>> 
>> Uncaught TypeError: Cannot read property 'mapProperty' of undefined
>>     module.exports.Em.Route.extend.createServiceWidget @ app.js:80158
>>     Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17348
>>     Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
>>     Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
>>     Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
>>     Ember.StateManager.Ember.State.extend.send @ vendor.js:17333
>>     App.MainServiceInfoSummaryController.Em.Controller.extend.createWidget @ app.js:24925
>>     App.MainServiceInfoSummaryView.Em.View.extend.doWidgetAction @ app.js:198659
>>     ActionHelper.registeredActions.(anonymous function).handler @ vendor.js:21227
>>     (anonymous function) @ vendor.js:13019
>>     f.event.dispatch @ vendor.js:126
>>     h.handle.i @ vendor.js:126
>> 
>> Did someone manage to create a widget for Kafka?
>> 
>> Thanks
>> 
>> Benoit
> 


2.2.1.1 and Kafka widget

2016-04-12 Thread Benoit Perroud
Hi All,

2.2.1.1 release notes mention
https://issues.apache.org/jira/browse/AMBARI-14941, which is supposed to
enhance the Kafka and Storm widgets.

I did deploy 2.2.1.1 and the Kafka widgets disappeared, but the Create
Widget button is not working.

In the dev console, I got the following error:

Uncaught TypeError: Cannot read property 'mapProperty' of undefined
    module.exports.Em.Route.extend.createServiceWidget @ app.js:80158
    Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17348
    Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
    Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
    Ember.StateManager.Ember.State.extend.sendRecursively @ vendor.js:17352
    Ember.StateManager.Ember.State.extend.send @ vendor.js:17333
    App.MainServiceInfoSummaryController.Em.Controller.extend.createWidget @ app.js:24925
    App.MainServiceInfoSummaryView.Em.View.extend.doWidgetAction @ app.js:198659
    ActionHelper.registeredActions.(anonymous function).handler @ vendor.js:21227
    (anonymous function) @ vendor.js:13019
    f.event.dispatch @ vendor.js:126
    h.handle.i @ vendor.js:126

Did someone manage to create a widget for Kafka?

Thanks

Benoit


Re: Hadoop Backup and Archival Cluster

2016-02-10 Thread Benoit Perroud
We're using Trumpet (http://verisign.github.io/trumpet/), an iNotify-like
mechanism for HDFS, as the foundation of such inter-cluster replication.
In a nutshell, every new file created in Cluster A notifies a
replication system, which copies the file to Cluster B (see
https://github.com/verisign/trumpet/blob/master/examples/src/main/java/com/verisign/vscc/hdfs/trumpet/client/example/TestApp.java
for an example).
For keeping Hive partitions in sync,
https://github.com/daplab/hive-auto-partitioner should do it (it also relies
on Trumpet).

Benoit

On Wed, Feb 10, 2016 at 7:37 PM David Whitmore <
david.whitm...@catalinamarketing.com> wrote:

> Vivek,
>
>
>
> You are correct, distcp will overwrite a file if it has changed or is new.
>
> As for running this in real time (i.e. as soon as data is deposited on the
> source cluster), you will have to handle that yourself.
>
> Please be aware that if you are talking about Hive tables, you will also need
> the Hive metastore.
>
> We copy our critical data from a Production Cluster to another Production
> Cluster and to a Test Cluster on a daily basis.
>
> Also, the contents of the Hive Metastore database.
>
> Be aware if you restore the Hive Metastore database on the destination
> cluster, any tables created solely on the destination cluster may disappear.
>
>
>
> David
>
>
>
>
>
> *From:* Vivek Singh Raghuwanshi [mailto:vivekraghuwan...@gmail.com]
> *Sent:* Wednesday, February 10, 2016 1:28 PM
> *To:* user@ambari.apache.org
> *Subject:* Re: Hadoop Backup and Archival Cluster
>
>
>
> Thanks David,
>
>
>
> I want to replicate the data once it reaches the cluster, and delete it
> from the source cluster after one year. I want Cluster B to work as hot backup and
> archival, and Cluster A to hold only the latest data.
>
>
>
> And as per my information, distcp copies all the data and overwrites. Please
> correct me if I am wrong.
>
>
>
>
>
> On Wed, Feb 10, 2016 at 12:21 PM, David Whitmore <
> david.whitm...@catalinamarketing.com> wrote:
>
> Yes, you can run a distcp to copy data from one cluster to another; distcp also
> has an option that controls whether it will delete files on the destination if
> they are NOT on the source.
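
As a hedged illustration of those distcp options (paths and the source cluster URI are 
placeholders):

  # Copy only new or changed files from cluster A into cluster B:
  hadoop distcp -update hdfs://clusterA-nn:8020/data hdfs://clusterB-nn:8020/data

  # Adding -delete would also remove files on B that no longer exist on A.
  # For the "keep everything on the archive cluster" requirement in this thread,
  # simply do NOT pass -delete:
  # hadoop distcp -update -delete hdfs://clusterA-nn:8020/data hdfs://clusterB-nn:8020/data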
>
>
>
>
>
> *From:* Vivek Singh Raghuwanshi [mailto:vivekraghuwan...@gmail.com]
> *Sent:* Wednesday, February 10, 2016 1:16 PM
> *To:* user@ambari.apache.org
> *Subject:* Hadoop Backup and Archival Cluster
>
>
>
>
> Hi Friends,
>
>
>
> I am planning to set up a Hadoop cluster (A) with cluster replication (B),
> so that once data reaches Cluster A it is replicated to Cluster B.
> I have one question: if I delete data from Cluster A on the basis of
> time (e.g. one-month-old data), is it also removed from Cluster B? If yes, how
> can I avoid this?
>
> What I want to achieve:
>
> 1. Once data reaches Cluster A, it is automatically replicated to
> Cluster B.
>
> 2. After one year, old data is removed automatically from Cluster A but not
> from Cluster B.
>
> 3. If anyone wants to query the latest data, Cluster A is available, but
> for older data Cluster B is available.
>
>
>
>
>
> Regards
>
> --
>
> ViVek Raghuwanshi
> Mobile -+91-09595950504
> Skype - vivek_raghuwanshi
> IRC vivekraghuwanshi
> http://vivekraghuwanshi.wordpress.com/
> http://in.linkedin.com/in/vivekraghuwanshi
>
>
>
>
>
> --
>
> ViVek Raghuwanshi
> Mobile -+91-09595950504
> Skype - vivek_raghuwanshi
> IRC vivekraghuwanshi
> http://vivekraghuwanshi.wordpress.com/
> http://in.linkedin.com/in/vivekraghuwanshi
>


Custom Alert definition

2015-12-30 Thread Benoit Perroud
Hi All,

I'm struggling with the following use case:

I want to add a custom alert which should call a custom script monitoring
some Pig jobs.

My alert definition currently looks like this:

  "AlertDefinition" : {
"cluster_name" : "MyCluster",
"component_name" : null,
"description" : "My description",
"enabled" : true,
"ignore_host" : true,
"interval" : 5,
"label" : "My_Label",
"name" : "my_name",
"scope" : "ANY",
"service_name" : "PIG",
"source" : {
  "parameters" : [
{
  "name" : "connection.timeout1",
  "display_name" : "Connection Timeout1",
  "units" : "seconds",
  "value" : 10.0,
  "description" : "The maximum time before this alert is considered
to be CRITICAL",
  "type" : "NUMERIC",
  "threshold" : "CRITICAL"
}
  ],
  "path" : "/usr/local/bin/pig_check.sh",
  "type" : "SCRIPT"
}
  }

My alert is added properly, i.e. I'm able to
query api/v1/clusters/DAPLAB02/alert_definitions/${alert_id}, but I don't
see my script called at all. I mean, the state in Ambari is NONE, and I
don't see any alert (api/v1/clusters/DAPLAB02/alerts) with definition_id =
${alert_id}.
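
One hedged way to debug this is to force an immediate evaluation and then check the agent log 
on a host where the alert should run (the run_now flag is from the Ambari alerts documentation; 
credentials are placeholders, and the script must of course exist and be executable on every 
agent host):

  curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
    "http://AMBARI_HOST:8080/api/v1/clusters/DAPLAB02/alert_definitions/${alert_id}?run_now=true"

  # Then look for the definition name or script name in the agent log:
  grep -i "my_name\|pig_check" /var/log/ambari-agent/ambari-agent.log | tail -20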

Also, is there more detailed documentation about properties such as:
- ignore_host
- what is the difference between name and description?
- can we add general checks not related to any service?
- with the METRIC source, can we monitor metrics reported in the Ambari
Collector instead of JMX?

Thanks in advance,

Benoit.


Re: Any way to reset Ambari Install Wizard?

2015-07-28 Thread Benoit Perroud
Some manual updates in the DB are most likely needed.

*WARNING* use this at your own risk

The table that needs to be updated is cluster_version.
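
A hedged sketch for the default embedded-Postgres setup (back up first; table, column and 
state names can differ between Ambari versions, so inspect before updating anything):

  # Back up, then look at the version records that the wizard thinks are stuck.
  pg_dump -U ambari ambari > ambari-backup.sql              # default password: bigdata
  psql -U ambari -d ambari -c "SELECT * FROM cluster_version;"
  psql -U ambari -d ambari -c "SELECT * FROM host_version;"
  # Only after confirming which row is wrong, e.g.:
  # psql -U ambari -d ambari -c "UPDATE cluster_version SET state='CURRENT' WHERE id=...;"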

From what I tested of 2.1, it required less manual intervention than 2.0.1:
the upgrade has a retry button for most of the steps, which is really nice.

Hope this helps.

Benoit



2015-07-28 20:01 GMT+02:00 Ken Barclay :

>  Hello,
>
>  I upgraded a small test cluster from HDP 2.1 to HDP 2.2 and Ambari
> 2.0.1. In following the steps to replace Nagios + Ganglia with the Ambari
> Metrics System using the Ambari Wizard, an install failure occurred on one
> node due to an outdated glibc library. I updated glibc and verified the
> Metrics packages could be installed, but couldn’t go back and finish the
> installation through the wizard. The problem is that it flags some of the
> default settings as needing to be changed, but it skips past the screen that
> would let those settings be changed so quickly that nothing can be entered.
> So the button that allows you to proceed
> with the installation never becomes enabled.
>
>  I subsequently manually finished the Metrics installation using the
> Ambari API and have it running in Distributed mode. But Ambari’s wizard
> cannot be used for anything now: the same problem described above occurs
> for every service I try to install.
>
>  Can Ambari be reset somehow in this situation, or do I need to reinstall
> it?
> Or do you recommend installing 2.1?
>
>  Thanks
> Ken
>


Re: HA NameNode switching without reason

2015-07-24 Thread Benoit Perroud
This is probably not the best place to ask such questions, as they are not
specifically related to Ambari but to HDFS.

There are lots of scenarios in which a NN can switch, and there is always a
good reason for it :)

Some of them are:
- if you're running an older version of Hadoop/HDP (2.1), slow block reports
or slow fsimage transfers can lead to a NN switch,
- your RPC pool is too small (NameNode server threads),
- you're hit by the futex lock bug (
https://groups.google.com/forum/#!topic/mechanical-sympathy/QbmpZxp6C64)
and might need to upgrade your kernel(s).
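
Two quick, hedged checks for the second point (log path and property name as in a stock HDP 
layout; adjust for your environment):

  # Current NameNode server thread pool size (raise dfs.namenode.handler.count
  # via Ambari if it is still small for your cluster size):
  hdfs getconf -confKey dfs.namenode.handler.count

  # Failover reasons usually show up in the ZKFC log on the NameNode hosts:
  grep -iE "fencing|transition|SERVICE_NOT_RESPONDING" \
    /var/log/hadoop/hdfs/hadoop-hdfs-zkfc-*.log | tail -20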





2015-07-23 17:51 GMT+02:00 Loïc Chanel :

> Hi,
>
> I am using a high-availability cluster with 3 JournalNodes and 2 NameNodes
> on 2 out of 3 of these hosts, and the active NameNode switched hosts 3 times in
> less than 24 hours for no apparent reason.
>
> This can't be a network problem, as the logs clearly indicate that the
> NameNode can't send edit logs to the JournalNode running on the exact same host
> (calling it by its IP). It doesn't seem to be a CPU or RAM
> problem either: sar does not report any abnormality, and the Ganglia
> graphs show that the JVM has far more memory than it needs.
>
> Do any of you have an idea about where the problem might come from ?
> Thanks in advance,
>
>
> Loïc
>
> Loïc CHANEL
> Engineering student at TELECOM Nancy
> Trainee at Worldline - Villeurbanne
>


Re: Change hostname on running cluster

2015-04-20 Thread Benoit Perroud
I did it earlier this year with Ambari 1.7 and PostgreSQL. As noted
earlier, no master processes were running on the nodes I renamed.
Provided without guarantees:
http://www.swiss-scalability.com/2015/01/rename-host-in-ambari-170.html



2015-04-19 13:22 GMT+02:00 Frank Eisenhauer :

> Seems like renaming a host is more complex than expected, especially in my
> case, where I have to rename the hosts with the master services like the
> namenode, hbase master etc.
>
> I'm currently considering installing the test lab from scratch and migrating
> the "old" lab to the newly installed one.
>
>
>
>
>
> On Sunday, April 19, 2015, Alejandro Fernandez 
> wrote:
>
>>  Renaming a host also has implications because NameNode and other
>> services can still have references to the old name in several configs.
>> If the host has no master components, then it's definitely easier.
>>
>>  The following steps can be used to change host name in the Ambari
>> database, so please use at your own risk.
>>  1. Stop all services through Ambari UI.
>>
>>  2. Stop ambari-server on Ambari server host from command line, run:
>> > ambari-server stop
>>
>>  3. Stop ambari-agent on all of the hosts from command line, run:
>> > ambari-agent stop
>> This is done to make the operation safer.
>>
>>  4. Backup database on the Ambari server from command line, please run:
>> > mkdir /var/db_backup
>> > cd /var/db_backup
>>
>>  *Postgres*
>> > pg_dump -U ambari ambari > ambari1.sql (default password: bigdata)
>> > pg_dump -U mapred ambarirca > ambarirca1.sql (default password: mapred)
>>
>>  *MySQL*
>> In Linux, the MySQL databases are stored in /var/lib/mysql by default.
>> > mysqldump -u ambari ambari > ambari1.sql (default password: bigdata)
>>
>>  5. Replace all occurrences of old hostname with new hostname.
>> For example:
>> > sed 's/old-hostname/new-hostname/g' ambari1.sql > ambari2.sql
>> > sed 's/old-hostname/new-hostname/g' ambarirca1.sql > ambarirca2.sql
>> (not needed when using MySQL)
>> Warning: Use tools that are appropriate for you. Please be careful to not
>> accidentally replace unintended strings in the database data.
>>
>>  6. Clean up Ambari database from command line, please run:
>> > ambari-server reset
>>
>>  7. Recreate Ambari database from command line, please run:
>> *Postgres*
>> > su postgres -c 'psql -c "drop database ambari" '
>> > su postgres -c 'psql -c "drop database ambarirca" '
>> > su postgres -c 'psql -c "create database ambari" '
>> > su postgres -c 'psql -c "create database ambarirca" '
>>
>>  *MySQL*
>> > mysql -u ambari -pbigdata  (or other password)
>> mysql> DROP DATABASE ambari;
>> mysql> CREATE DATABASE ambari;
>> mysql> exit;
>>
>>  8. Load into Ambari database with the modified database generated in
>> step 5 and run the following commands:
>> *Postgres*
>> > su postgres -c 'psql -f ambari2.sql -d ambari'
>> > su postgres -c 'psql -f ambarirca2.sql -d ambarirca'
>>
>>  *MySQL*
>> > mysql -u ambari -pbigdata ambari < ambari2.sql (default password:
>> bigdata)
>>
>>  9. Start ambari-server on Ambari server host from command line, please
>> run:
>> > ambari-server start
>>
>>  10. Start ambari-agent on all of the hosts from command line, please
>> run:
>> > ambari-agent start
>>
>>  11. Verify that Ambari shows new host name.
>>
>>  12.Restart all services through Ambari UI, this will cause all
>> components to pick up any changes in configs.
>> Note: This procedure does not modify any database that is used by Hadoop
>> components such as Hive metastore database or Oozie database. If database
>> access were granted to old hosts, you may need to grant permissions to the
>> new host before starting the Hadoop service.
>>
>>  Thanks,
>> Alejandro
>>
>>  On 4/18/15, 1:16 PM, "Yusaku Sako"  wrote:
>>
>>  Just FYI...
>> What I've seen folks do is dump the database, keep a backup, replace all
>> occurrences of the old hostname to the new hostname in the dump file, then
>> reimport.
>>
>>  Yusaku
>>
>>  On 4/18/15 9:51 AM, "Sumit Mohanty"  wrote:
>>
>>  +Alejandro
>>
>> In theory, you can stop ambari-server, modify all occurrences of the
>> hostname, and that should be it. There is no first-class support for it.
>>
>>  Alejandro, did you look at the possibility of manually changing all host
>> names to rename a host
>> (https://issues.apache.org/jira/browse/AMBARI-10167)
>>
>>  -Sumit
>> 
>> From: Frank Eisenhauer 
>> Sent: Saturday, April 18, 2015 12:31 AM
>> To: Ambari User
>> Subject: Change hostname on running cluster
>>
>>  Hi All,
>>
>>  we have a running hadoop cluster where we unfortunately have a hostname
>> in uppercase, e.g. SRV-HADOOP01.BIGDATA.LOCAL.
>>
>>  As of Ambari 1.7 we are experiencing a lot of side effects which are
>> presumably caused by the hostnames in uppercase.
>>
>>  I would like to rename the particular hosts(e.g.
>> srv-hadoop01.bigdata.local), so that there are only hosts with lowercase
>> names in the cluster.
>>
>>  Is it possible

Re: cannot override hdfs_log_dir_prefix

2015-03-15 Thread Benoit Perroud
You can always symlink to the final destination.
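
A hedged sketch, assuming the stock /var/log/hadoop prefix and that the affected components 
are stopped first (target path is a placeholder):

  # Keep the greyed-out prefix as-is, but make it point at the disk you want.
  mkdir -p /data01/log
  mv /var/log/hadoop /data01/log/hadoop
  ln -s /data01/log/hadoop /var/log/hadoop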



On Saturday, 14 March 2015, Brian Jeltema wrote:

> I need to override the value of hdfs_log_dir_prefix in an Ambari
> configuration,
> but the configs UI has the field greyed out (advanced: Hadoop Log Dir
> Prefix).
> How do I change this value?
>
> Thanks
> Brian


Re: unable to locate package ambari-server

2015-02-15 Thread Benoit Perroud
If you plan to install HDP on 14.04, you might find it useful to get some
inspiration from the notes I captured while doing exactly that, but keep in
mind this is a big hack and obviously not supported by Hortonworks nor the
community:
http://www.swiss-scalability.com/2014/12/install-hdp-22-on-ubuntu-1404-trusty.html

Also, as a side note since we're talking about Ubuntu: MRv1 does not work at
all on Ubuntu (there are plenty of `rpm -qa | grep hadoop1` checks in the code
to detect whether MRv1 is enabled, which obviously do not work on Ubuntu).

Benoit


2015-02-15 20:19 GMT+01:00 Sean Roberts :

>  B - You had likely missed the ‘apt-get update’ after putting the file in
> place. Note that the current packages are for Ubuntu12 not Ubuntu 14, so
> it’s not guaranteed to work.
>
>  --
> Hortonworks - We do Hadoop
>
>  Sean Roberts
> Partner Solutions Engineer - EMEA
> @seano
>
> From: Adaryl Bob Wakefield, MBA
> Reply: user@ambari.apache.org
> Date: February 15, 2015 at 19:17:25
> To: user@ambari.apache.org
> Subject: Re: unable to locate package ambari-server
>
>I think there was. I was working on Ubuntu 12.04. After I upgraded to
> 14.04 (after a bunch of other problems) I was able to pull it down using
> apt-get.
> B.
>
>  *From:* Sean Roberts 
> *Sent:* Sunday, February 15, 2015 5:36 AM
> *To:* user@ambari.apache.org
> *Subject:* Re: unable to locate package ambari-server
>
>
> B - Double check these:
>
>    1. Please make sure the sources list is in the appropriate place (ls
>    /etc/apt/sources.list.d/ambari.list)
>    2. The content of that file matches the content of
>    http://public-repo-1.hortonworks.com/ambari/ubuntu12/1.x/updates/1.7.0/ambari.list
>    (cat /etc/apt/sources.list.d/ambari.list)
>    3. You've updated the apt sources list: sudo apt-get update
>
> If after confirming all of that, there would have to be something wrong
> with your Ubuntu dpkg/apt database.
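
A hedged recap of those checks as commands (the repo URL is the Ambari 1.7.0 / Ubuntu 12 one 
from the list file above; you may also need to import the Hortonworks GPG key as described in 
the install guide):

  wget -O /etc/apt/sources.list.d/ambari.list \
    http://public-repo-1.hortonworks.com/ambari/ubuntu12/1.x/updates/1.7.0/ambari.list
  apt-get update
  apt-cache policy ambari-server   # should now show a candidate version
  apt-get install ambari-server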
>   --
> Hortonworks - We do Hadoop
>
> Sean Roberts
> Partner Solutions Engineer - EMEA
> @seano
>
> From: Adaryl Bob Wakefield, MBA (adaryl.wakefi...@hotmail.com)
> Reply: user@ambari.apache.org
> Date: February 15, 2015 at 10:49:51
> To: user@ambari.apache.org
> Subject: unable to locate package ambari-server
>
> I'm following the directions that ship with the Hortonworks Data
> Platform, to the letter, and yet when I run:
> apt-get install ambari-server
>
> I get:
> unable to locate package ambari-server
>
> I’m dead stopped here and not sure what to do. An apt-cache pkgnames shows
> that there are no Ambari packages in the list.
>
> B.
>
>


Re: Advice on building Ambari from source?

2015-02-02 Thread Benoit Perroud
I built the branch 1.7 without any problems
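
A hedged sketch of a minimal build that sidesteps the network-dependent test failures quoted 
below (the exact branch name and any packaging flags are assumptions; check the Ambari build 
wiki and `git branch -r`):

  git clone https://github.com/apache/ambari.git
  cd ambari
  git checkout branch-1.7.0        # or whichever release branch you need
  mvn -B clean install -DskipTests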


On Friday, 30 January 2015, Hellmar Becker wrote:

> Hi all,
>
> I have been trying to build Ambari from the newest sources following the
> instructions in the Ambari wiki but I seem to be failing in the tests for
> ambari-server.
>
> Here is a snippet from my build log:
>
> 
>
> Results :
>
> Failed tests:   
> testRequestURL(org.apache.ambari.server.view.HttpImpersonatorImplTest):
> expected:<[Dummy text from HTTP response]> but was:<[]>
>   
> testRequestURLWithCustom(org.apache.ambari.server.view.HttpImpersonatorImplTest):
> expected:<[Dummy text from HTTP response]> but was:<[]>
>   
> testHeartbeatStateCommandsEnqueueing(org.apache.ambari.server.agent.TestHeartbeatMonitor):
> HeartbeatMonitor should be already stopped
>
> Tests in error:
>   testDeadlockBetweenImplementations(org.apache.ambari.server.
> state.cluster.ClusterDeadlockTest): test timed out after 3
> milliseconds
>   testUpdateRepoUrlController(org.apache.ambari.server.controller.
> AmbariManagementControllerTest): Could not access base url .
> http://public-repo-1.hortonworks.com/HDP-1.1.1.16/repos/centos5 .
> java.net.UnknownHostException: public-repo-1.hortonworks.com
>
> Tests run: 2628, Failures: 3, Errors: 2, Skipped: 15
>
> 
>
> This is on CentOS 6.5, using the trunk branch. Or is there a defined
> branch that I should check out for a successful build?
>
> Regards,
> Hellmar
>
> 
> Hellmar Becker
> Edmond Audranstraat 55
> NL-3543BG Utrecht
> mail: bec...@hellmar-becker.de
> mobile: +31 6 29986670
> 
>
>


Re: Managing an existing cluster

2015-01-27 Thread Benoit Perroud
Not really.

What you can do, though, is replace a few components at a time, starting
with ZooKeeper, HDFS, etc., and reduce the downtime.

CDH 5.3 and HDP 2.2 /should/ (no guarantee here) be protocol-compatible at the
HDFS level, which means you might be able to stop HDFS on CDH, start the HDP
version against the same fsimage and DataNode data dirs, and run/finalize the
metadata upgrade, saving a lot of migration time (i.e. avoiding copying all the
data from CDH:hftp to HDP:hdfs, which would require double the capacity).

Benoit


2015-01-27 19:31 GMT+01:00 Daniel Haviv :

> Hi,
> We have a cluster that was manually installed using CDH rpms that we want
> to manage using Ambari.
> Is it possible to somehow manage an existing running cluster?
>
> Thanks,
> Daniel
>
>


Re: Move name node wizard failing

2015-01-12 Thread Benoit Perroud
Otherwise, look in the ambari-agent logs on the node where the NameNode is
being disabled (/var/log/ambari-agent).

You might also want to turn on debugging in all these daemons, restart them
and retry to move the NN.
agents: /etc/ambari-agent/conf/ambari-agent.ini, loglevel=DEBUG
server: /etc/ambari-server/conf/log4j.properties,
log4j.rootLogger=DEBUG,file
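
For example, a hedged sketch of the agent-side part on the host where the disable step failed:

  sed -i 's/^loglevel=.*/loglevel=DEBUG/' /etc/ambari-agent/conf/ambari-agent.ini
  ambari-agent restart
  tail -f /var/log/ambari-agent/ambari-agent.log   # watch while retrying the move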




2015-01-12 20:46 GMT+01:00 Benbenek, Waldyn J :

>  I did look at those and saw nothing about the name node move or about
> disabling.
>
>
>
> On a related note: how does one re-enable the NameNode once it has been
> disabled? According to the UI the NameNode was, in fact, disabled even
> though the wizard reports a failure. I can't get it enabled again.
>
>
>
> Wally
>
>
>
> *From:* Benoit Perroud [mailto:ben...@noisette.ch]
> *Sent:* Monday, January 12, 2015 12:08 PM
> *To:* user@ambari.apache.org
> *Subject:* Re: Move name node wizard failing
>
>
>
> You should take a look at the server logs in
>
> /var/log/ambari-server
>
>
>
>
>
> On Monday, 12 January 2015, Benbenek, Waldyn J wrote:
>
>  I am trying to move the name node to a new host I have added
> successfully.  The “move name node” wizard gets to step three, “Disable
> original name node,” and fails with no more explanation.  Continuous
> retries also fail.  Where do I go to see the some sort of explanation as to
> what is going wrong?
>
>
>
> Thanks,
>
>
>
> Wally Benbenek
>
>


Re: Move name node wizard failing

2015-01-12 Thread Benoit Perroud
You should take a look at the server logs in

/var/log/ambari-server



On Monday, 12 January 2015, Benbenek, Waldyn J wrote:

>  I am trying to move the name node to a new host I have added
> successfully.  The “move name node” wizard gets to step three, “Disable
> original name node,” and fails with no more explanation.  Continuous
> retries also fail.  Where do I go to see the some sort of explanation as to
> what is going wrong?
>
>
>
> Thanks,
>
>
>
> Wally Benbenek
>