Although we had the discussion, and some great ideas were passed around, I
do not believe we came to any kind of consensus on what 1.0 should look
like. That discussion would have to be picked up again, so that we know
where we stand and can make the definition concrete, if we are going to
make this a 1.0 release.

I believe that the issues raised in that discussion as gating 1.0 are still
largely applicable, including upgrades.

Right now we have *ZERO* HDP 3.1 users. We would go from that to *only*
supporting 3.1 work and releases. So every user and deployment we currently
have will feel real pain, and have to slay real dragons, to move forward
with Metron.

With regard to support for older versions, the “backward compatibility”
that has been mentioned, I would not say that I have any specific plan in
mind. What I would say, rather, is that I believe we must be explicit,
setting expectations correctly and clearly with regard to our intent, while
demonstrating that we have thought the situation through. That discussion
has not happened; at least, I do not believe the prior dev thread really
handles it in context.

Depending on the upgrade situation for moving to 3.1, a dual stream of
releases, with fixes and new features to the extent that we can manage it,
may greatly reduce the pain for folks, or make it viable to stick with
Metron until they can upgrade.

The issue of what Metron *is*, feature-wise, may be another one we want to
take up at some point. The idea is: can we separate the Metron integration
parts from the Metron core functionality, such that we can work on them
separately and thus support multiple platforms through
integrations/applications? Of course, a definition of Metron’s value beyond
integration, and of what those feature and application boundaries are,
would be necessary.




On August 26, 2019 at 18:52:57, Michael Miklavcic (
michael.miklav...@gmail.com) wrote:

Hi devs and users,

Some questions were asked in the Slack channel about our ongoing HDP/Hadoop
upgrade and I'd like to get a discussion rolling. The original Hadoop
upgrade discuss thread can be found here:
https://lists.apache.org/thread.html/37cc29648f0592cc39d3c78a0d07fce38521bdbbc4cf40e022a7a8ea@%3Cdev.metron.apache.org%3E

The major issue with upgrading the Hadoop platform is that there are
breaking changes: code that runs on HDP 3.1 will not run on 2.x. Here is a
sampling of the core components we depend on that, as far as we know so
far, are not backwards compatible:

   1. The Core OS - we currently base our builds and test deployment on
   artifacts pulled from HDP. The latest rev of HDP no longer ships RPMs for
   CentOS 6, which means we need to upgrade to CentOS 7.
   2. Ambari
   3. HBase

This differs from individual components we've upgraded in the past, in that
our code could still be deployed on both the old and new versions of the
component in a backwards-compatible way. Based on semantic versioning, I
don't know that we can introduce this level of change in a point release,
which is the reason for kicking off this discussion. In the past, users and
developers in the community have suggested that they are -1 on a 1.x
release that does not provide an upgrade path:
https://lists.apache.org/thread.html/eb1a8df2d0a6a79c5d50540d1fdbf215ec83d831ff15d3117c2592cc@%3Cdev.metron.apache.org%3E

Is there a way we can avoid a 1.x release? If we do need 1.x, do we still
see upgrades as a gating function? The main issue is that this has the
potential to drag out the upgrade and further couple it with other
features. And with Storm 1.x being EOL'd, I'm not sure this is something
we can wait much longer for. I'll think on this and send out my own
thoughts once folks have had a chance to review.

Best,
Mike Miklavcic
Apache Metron, PMC, committer
