Hi Alejandro,

We use Ambari 2.1.2.

The blueprint I use to create the cluster points to stack "HDP", version "2.3".
The definition of that stack viewed via the Ambari GUI (i.e.
http://[AmbariServerNode]:8080/#/main/admin/stack/services) lists Spark version
"1.4.1.2.3".

Since we use some additional services in that cluster that are only tested against
certain versions of Spark, we need to be sure that what is in the stack definition
is exactly what gets installed. I did work around this problem by modifying the
repository base URL, but I'm not sure how the stack definition should be read in
that case, if this information can be changed somewhere else (in the metainfo.xml
file you mentioned) without the change being reflected in the Ambari GUI.
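
For reference, this is roughly the call I use to change the base URL before
registering the blueprint (the credentials and the CentOS 6 OS type / repo id are
just what apply to our setup, so treat this as a sketch rather than the exact
command):

  curl -u admin:admin -H "X-Requested-By: ambari" -X PUT \
    -d '{"Repositories": {"base_url": "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.2.0", "verify_base_url": true}}' \
    http://[AmbariServerNode]:8080/api/v1/stacks/HDP/versions/2.3/operating_systems/redhat6/repositories/HDP-2.3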

Perhaps we should also specify the stack release version in the blueprint, i.e. use
"2.3.2" instead of "2.3", so that it doesn't automatically assume the latest
release version (2.3.4 in this case)?
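
For reference, the relevant part of the blueprint currently looks roughly like this
(the blueprint name is just a placeholder, and I'm not sure whether a three-part
value such as "2.3.2" is even accepted for "stack_version"):

  {
    "Blueprints": {
      "blueprint_name": "hdp-cluster",
      "stack_name": "HDP",
      "stack_version": "2.3"
    },
    "host_groups": [ ... ]
  }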

Thank you very much for your answer. We wanted to understand how this is supposed
to work; perhaps this behaviour is deliberate, but so far I have relied on the
stack definition to verify which service versions will be installed.


Regards,

Michal


________________________________
From: Alejandro Fernandez <[email protected]>
Sent: Monday, January 4, 2016 7:13 PM
To: [email protected]; Sumit Mohanty; Michal Siemaszko
Subject: Re: Services' versions defined in stack ignored when provisioning 
cluster via blueprint

Hi Michal,

Which version of Ambari are you on?

In AMBARI-13830<https://issues.apache.org/jira/browse/AMBARI-13830> (on trunk), 
Spark's metainfo.xml file contains,
          <name>SPARK</name>
          <version>1.5.2.2.3</version>
          <extends>common-services/SPARK/1.4.1.2.3</extends>

It may very well be possible that the value is changed depending on which 
version of HDP 2.3 you install.

Thanks,
Alejandro

From: Michal Siemaszko <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Monday, January 4, 2016 at 8:34 AM
To: "[email protected]" <[email protected]>
Subject: Services' versions defined in stack ignored when provisioning cluster 
via blueprint


Hi,

For a project where the creation of Hadoop clusters needs to be fully automated, I
utilized Ambari's blueprints and auto-discovery features, in addition to
bootstrapping hosts via REST instead of the GUI (so manual host registration is no
longer necessary).

I ran into an issue where the versions of services defined in the stack used are
ignored during cluster provisioning. For example, I specify "stack_name" as "HDP"
and "stack_version" as "2.3" in the blueprint I use, and the "Spark" version
associated with that stack is "1.4.1.2.3". However, once the cluster is
provisioned, even though
http://[AmbariServerNode]:8080/#/main/admin/stack/services also shows "Spark"
version "1.4.1.2.3" as installed, submitting a sample Spark job via the Spark
shell on a client node reports "Spark" version "1.5.2"; the
`/usr/hdp/2.3.4.0-3485/spark/RELEASE` file on that node also shows "Spark"
version "1.5.2.2.3.4.0".

I got around this issue by changing the base URL of the repository used to
"http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.2.0" (instead
of "http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.3.4.0") prior
to provisioning the cluster from the blueprint.

I'm wondering whether this is expected behaviour or a bug, i.e. that the service
version defined in the stack is ignored unless such a change is made. Shouldn't
the service versions defined in the stack be respected and applied regardless of
the base URL? Perhaps I'm missing some other setting that should be applied in the
blueprint or the cluster creation template, but as far as I can tell everything is
as per the docs/examples I read.

Thank you for your input.

Regards,
Michal Siemaszko
