Re: life after Hortonworks

2020-05-14 Thread jeremy montgomery
@aaron - I don't think that the server/agent piece needs to be messed
with.  It is more Puppet than Ansible, but it works pretty well.  It has a
few issues that need to be worked out, but nothing insurmountable.

Agent/Server (pieces that need work)

   - Ability to invalidate the agent cache.
   - Multithreading - Currently alerts/restarts work on a concept of
   threads and timeouts.  You can end up with too many concurrent
   tasks/alerts, and things get backlogged waiting on the timeouts.
   However, if you up the threads, you run into a lot of race conditions
   with auto-restart and alerts firing concurrently.  The main piece here
   is that it needs a prioritized queuing system locally.
   - Order of restarts and hierarchy - Right now this is hard-coded at the
   server level within a component, and somewhat for full restarts based
   on the order in the left-hand bar.
      - Need the ability to modify the full restart sequence.
      - Need the ability to state whether a restart is blocking, blocking
      only when the component is on the same server, or non-blocking.
      - Need the ability to chain two different components together.  For
      example, if I restart a kerberized HiveServer2, I can have it
      restart Hue as well after it is restarted.  Otherwise Hue will
      error out since the SPNEGO hashes are different.
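
The local prioritized queuing suggested above could look something like
this (a minimal sketch, not Ambari's actual agent code; the task names and
priority values are made up for illustration):

```python
import itertools
import queue

# Lower number = higher priority: restarts should not be starved by a
# flood of concurrent alert checks.
PRIORITY_RESTART = 0
PRIORITY_ALERT = 1

_tiebreak = itertools.count()  # preserves FIFO order within a priority

def submit(q, priority, task_name):
    """Enqueue a task; the counter breaks ties between equal priorities."""
    q.put((priority, next(_tiebreak), task_name))

def drain(q):
    """Pop tasks in priority order; restarts come out before alerts."""
    order = []
    while not q.empty():
        _, _, task_name = q.get()
        order.append(task_name)
    return order
```

With a queue like this, a restart submitted behind a backlog of alert
checks would still run first instead of waiting out the alert timeouts.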

I'll make some edits to the Confluence page explaining how the
stacks/mpacks/extensions work.

On Tue, May 12, 2020 at 10:51 AM Matt Andruff 
wrote:

> @Aaron Bossert - For that type of change (Ansible) you probably need to
> write up a design paper.  (I have seen this done; I'm not sure it's
> required, but it seems to be common practice.)  Sounds like Ansible would
> be a good fit.
>
> @Jeremy - thanks for the insights, I might ping you if I get stuck trying
> to get into it.  I certainly wish there was a clean standard for
> installation locations.
>
>
>
> On Tue, May 12, 2020 at 10:36 AM Aaron Bossert 
> wrote:
>
>> Jeremy,
>>
>>
>>
>> I would be happy to tackle this with you and whoever else is willing… that
>> being said, I have always had a love/hate relationship with Ambari… the
>> stacks have always been intimidating to me… you mention revisiting the
>> architecture… do you have any ideas for what might be better?  In my
>> opinion, I have always wanted to replace the guts of Ambari with a system
>> that is underpinned by Ansible… it seems much more user-friendly and MUCH
>> easier to configure and extend for those who are not comfortable with
>> doing their own Java-to-Python work and creating RPMs and the like… What
>> are your thoughts?
>>
>>
>>
>>
>>
>> -- M. Aaron Bossert
>>
>>
>>
>> *From: *jeremy montgomery 
>> *Reply-To: *"user@ambari.apache.org" 
>> *Date: *Tuesday, May 12, 2020 at 9:14 AM
>> *To: *"user@ambari.apache.org" 
>> *Subject: *Re: life after Hortonworks
>>
>>
>>
>> I make mpacks all the time, but tend to push them as extensions because
>> the stack architecture is byzantine.
>>
>>    - It is Java that deploys Python code.  As such, the structure
>>    behaves like Java, which makes it a pain in the rear to track down
>>    simple Python functions.  Sometimes the code ends up in
>>    common-services, sometimes it stays in the current version of stacks.
>>    Some stack components only use code in their latest version; some
>>    have code files strung through the folders of 8 previous versions and
>>    3 stacks (looking at you, LLAP).
>>    - The install code doesn't know where it wants to be.  Sometimes it's
>>    in the stack, sometimes it is in Ambari.  This means that a version
>>    of Ambari tends to be hard-coded to 1-2 versions of a stack.
>>    - Changing JavaScript for a stack isn't possible.  This means that
>>    stack components are hard-coded into the Ember.js code with a bunch
>>    of if statements.
>>    - Making additions to a stack feature is a major process (like adding
>>    HBase Thrift to an existing HBase installation).
>>    - Upgrades require a yum file regardless of installation method, so
>>    learning how to create dummy RPMs is necessary.
>>
>> I'd be willing to chip in and maintain the stack Python code, but the
>> stack architecture really needs to be revisited.
>>
>>
>>
>> On Mon, May 11, 2020 at 1:50 PM Aaron Bossert 
>> wrote:
>>
>> Matt,
>>
>>
>>
>> Yeah, my thought was to start with the most recent HDP/HDF stack
>> definition as a starting point.  It just so happens that I have a
>> backburner project to do this already.  I have been using Horto

Re: life after Hortonworks

2020-05-12 Thread jeremy montgomery
I make mpacks all the time, but tend to push them as extensions because the
stack architecture is byzantine.

   - It is Java that deploys Python code.  As such, the structure behaves
   like Java, which makes it a pain in the rear to track down simple Python
   functions.  Sometimes the code ends up in common-services, sometimes it
   stays in the current version of stacks.  Some stack components only use
   code in their latest version; some have code files strung through the
   folders of 8 previous versions and 3 stacks (looking at you, LLAP).
   - The install code doesn't know where it wants to be.  Sometimes it's in
   the stack, sometimes it is in Ambari.  This means that a version of
   Ambari tends to be hard-coded to 1-2 versions of a stack.
   - Changing JavaScript for a stack isn't possible.  This means that stack
   components are hard-coded into the Ember.js code with a bunch of if
   statements.
   - Making additions to a stack feature is a major process (like adding
   HBase Thrift to an existing HBase installation).
   - Upgrades require a yum file regardless of installation method, so
   learning how to create dummy RPMs is necessary.
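
On that last point, a dummy RPM is essentially an empty package whose
metadata satisfies yum's version checks.  A spec for one could be generated
like this (a sketch only; the package name, version, and summary are
invented for illustration):

```python
import textwrap

# An empty "noarch" package whose only job is to satisfy yum-based
# version checks during an upgrade; it installs no files.
def dummy_spec(name, version):
    return textwrap.dedent(f"""\
        Name: {name}
        Version: {version}
        Release: 1
        Summary: Placeholder so yum-based repo checks succeed
        License: ASL 2.0
        BuildArch: noarch

        %description
        Empty package; the real bits are installed outside yum.

        %files
        """)
```

The resulting spec file would then be fed to `rpmbuild -bb` to produce the
dummy RPM.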

I'd be willing to chip in and maintain the stack Python code, but the stack
architecture really needs to be revisited.

On Mon, May 11, 2020 at 1:50 PM Aaron Bossert  wrote:

> Matt,
>
>
>
> Yeah, my thought was to start with the most recent HDP/HDF stack
> definition as a starting point.  It just so happens that I have a
> backburner project to do this already.  I have been using Hortonworks for a
> long time, but have found recently that I needed to install newer versions
> of Apache Druid and Apache Storm, which would require me to do a new
> stack… Full disclosure: I have NEVER mucked with stacks and am not a Python
> guy… I write in Scala/Java predominantly… that being said, I would be happy
> to collaborate on this if anyone feels that this would be worthwhile and
> useful to the broader community.
>
>
>
> -- M. Aaron Bossert
>
>
>
> *From: *Matt Andruff 
> *Reply-To: *"user@ambari.apache.org" 
> *Date: *Monday, May 11, 2020 at 1:46 PM
> *To: *"user@ambari.apache.org" 
> *Subject: *Re: life after Hortonworks
>
>
>
> Cloudera no longer uses Ambari.  They stuck with Cloudera Manager in their
> release of CDP (CDH + HDP = CDP).
>
>
>
> https://docs.cloudera.com/cdpdc/7.0/overview/topics/cdpdc-overview.html
> 
>
>
>
> I don't think this means that Ambari is dead.  I do think it means, as
> stated, that the community will need to take on packaging a stack and
> building RPMs (or at least packaging the stack).
>
>
>
> The legacy code for the HDP 2.6 stack stream is already out there in the
> repo, so it's just some work to create RPMs of whatever Ambari wants to
> release.
>
>
>
> I assume the stacks section is so poorly documented because Hortonworks
> was doing the work of packaging.  I'm not sure of the level of effort
> needed to make a stack work, but it seems like we could start with the
> last HDP build (3.1.4) and keep moving forward.
>
>
>
>
>
> On Mon., May 11, 2020, 13:08 Stephen Boesch,  wrote:
>
> I am reading between the lines that ambari is no longer a strategic
> platform. Would someone please provide a link/reference to a Cloudera press
> release or blog describing this and maybe related decisions/roadmaps?  thx!
>
>
>
> Am Mo., 11. Mai 2020 um 10:05 Uhr schrieb Aaron Bossert <
> aa...@punchcyber.com>:
>
> For what it is worth, I have written blueprints before, but never stacks.
> The documentation and tutorials for Ambari stacks and blueprints are
> horribly out of date, incomplete, or flat-out missing.  Perhaps that could
> be an initial task for the community to undertake, so that those of us who
> were using the Hortonworks suite of tools and were comfortable with Ambari
> can sever the cord, as it were… relying on commercial companies to
> support open source tools once their objectives have changed is rarely a
> good thing.
>
>
>
> --
>
> *From:* Ganesh Raju 
> *Sent:* Monday, May 11, 2020 12:57:50 PM
> *To:* user@ambari.apache.org 
> *Subject:* Re: life after Hortonworks
>
>
>
> Here is more Apache Bigtop info
>
>
>
> releases
> 

Re: How to set custom field as type=password In cluster template or blueprint?

2019-02-11 Thread jeremy montgomery
"kafka-broker": {
"properties_attributes": {
"password": {
"ssl.truststore.password": "true",
"ssl.keystore.password": "true",
"ssl.key.password": "true"
}
},
"properties": {

One of the things I don't like about the blueprint generators is they don't
pull out the properties_attributes sections.
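
One way around that is a post-processing step over the generated blueprint.
This is a hedged sketch (it assumes the blueprint has already been loaded
as a dict; the site and property names follow the kafka-broker fragment
above):

```python
# Sketch: mark selected properties of one config site as passwords by
# injecting a properties_attributes/password block into a generated
# blueprint dict. Adapt site/property names to your cluster.
def mark_passwords(blueprint, site, prop_names):
    for cfg in blueprint.setdefault("configurations", []):
        if site in cfg:
            break
    else:
        # Site not present yet: create an empty entry for it.
        cfg = {site: {}}
        blueprint["configurations"].append(cfg)
    attrs = cfg[site].setdefault("properties_attributes", {})
    passwords = attrs.setdefault("password", {})
    for name in prop_names:
        passwords[name] = "true"
    return blueprint
```

Calling it with site `kafka-broker` and the three `ssl.*` properties
reproduces the fragment above.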

On Mon, Feb 11, 2019 at 7:22 AM Sean Roberts  wrote:

> Ambari experts - We have a custom field (not defined in stack definitions)
> that contains a password.  If we specify it during cluster creation, it
> will show in Ambari's configs.
>
> *How do we, at cluster creation time, set the field to "Type=PASSWORD"?*
>
> This can be done manually once a cluster is already deployed, however I'm
> not seeing how to do it in the cluster creation template or blueprint.
>
> --
> Sean Roberts
>


Re: Conditions in Quicklinks checks

2019-01-11 Thread jeremy montgomery
It needs to be extended beyond that.  You have situations where you have
load balancers, Knox URLs, Knox vanity URLs for multiple Knox servers, etc.
On Fri, Jan 11, 2019 at 3:34 AM Sean Roberts  wrote:

> Ambari experts - Within `quicklinks.json`, is it possible to have an OR
> condition?
>
> For example:
> set protocol="https" if
>   (desired=HTTPS_ONLY
> OR desired=HTTP_AND_HTTPS)
>
> ```
> {
>   "name": "default",
>   "description": "default quick links configuration",
>   "configuration": {
> "protocol":
> {
>   "type":"https",
>   "checks":[
> {
>   "property":"dfs.http.policy",
>   "desired":"HTTPS_ONLY",
>   "site":"hdfs-site"
> }
>   ]
> },
> ```
>
> --
> Sean Roberts
>


Modifying a Stack with an Extension

2018-06-25 Thread jeremy montgomery
So the following has to do with adding the HBase Thrift and Rest Server to
Ambari.  Personally, I like using extensions because it isolates the
functions from the upgrade process.  However, it doesn't look like an
extension can extend either a stack or a common-service.

Stacks => HDP 2.6, BigInsights 4.2.5

Goals:
Be able to add Thrift/Rest with configs
Add appropriate alerts
Recompile app.js to have them appear in the summary panel

QUESTION 1 => Is it possible to extend this with an extension?  Like extend
HDP 2.3 instead of common-services?

QUESTION 2 => Is it possible to trigger a recompile of app.js with a flag?

First Try:
Extension Pack metainfo.xml

<metainfo>
  <schemaVersion>2.0</schemaVersion>
  <services>
    <service>
      <name>HBASE</name>
      <extends>common-services/HBASE/0.96.0.2.0</extends>
      <!-- thrift and rest components -->
    </service>
  </services>
</metainfo>

Result:
Not able to add Thrift/Rest Server.

Second Try:
Modifying HDP 2.3 Stack HBase metainfo.xml to add them:

Result:
Able to add Thrift/Rest server with configs.
Able to add the appropriate alerts.
Only Rest shows up in the summary panel, since Rest came with BigInsights
4.0 (but was lost when it merged with HDP 2.3).  After digging, this seems
to be because you need a stack change or extension to recompile app.js to
include the new information.


There also appears to be an inheritance problem when stacks operate on the
same common-service.

BigInsights 4.0 adds the HBase Rest Server, but this isn't anywhere in the
HDP stack, so it doesn't show up as a possibility.  However, if you add a
reference to the HDP 2.3 metainfo.xml, it will pick up all of the
BigInsights code.