It seems reasonable to make an exception for coprocessors on binary compatibility. Can this be explicitly documented, if it isn't already, so folks are sure to know?

    James

On 04/04/2013 05:53 PM, Andrew Purtell wrote:
Thanks for taking the time for the thoughtful feedback, James.


As "owner" of this space, I will claim that neither binary compatibility
nor even interface compatibility should be expected at this time, even among
point releases. This is for two reasons, the first being the most important:

     1) The Coprocessor API caters to only a few users currently, so it feels
the gravitational pull of these major (new) users, e.g. security, Francis
Liu's work, Phoenix. We can expect this to lessen naturally over time as --
exactly here -- users like Phoenix become satisfied with the status quo and
begin to push back. Your input going forward will be valuable.

     2) Coprocessors are an extension surface for internals.



On Thu, Apr 4, 2013 at 4:29 PM, James Taylor <jtay...@salesforce.com> wrote:

Binary compat is a slippery slope. It'd be a bummer if we couldn't
take advantage of all the innovation you guys are doing. At the same time,
it's tough to require the Phoenix user community, for example, to upgrade
their HBase servers to be able to move to the latest version of Phoenix. I
don't know what the right answer is, but here are a couple of concrete
cases:
- between 0.94.3 and 0.94.4, new methods were introduced on RegionScanner.
Often coprocessors will have their own implementation of these so that they
can aggregate in postScannerOpen. Though this broke binary compat, it also
improved scan performance. Where does the binary compatibility line stop?
(See the first sketch after this list.)
- between 0.94.3 and 0.94.4, the class loading changed for coprocessors.
If a coprocessor was on the classpath, it didn't matter what the jar path
was, it would load. In 0.94.4, that was no longer the case - if the jar
path was invalid, the coprocessor would no longer load. Though this broke
compatibility, it was a good cleanup for the class loading logic. Call it a
bug fix or a change in behavior, but either way it was an incompatible
change. Does a change in behavior that causes incompatibilities still meet
the binary compatibility criteria? (See the second sketch after this list.)
- between 0.94.4 and 0.94.5, the essential column family feature was
introduced. This is an example of one that is binary compatible. We're able
to take advantage of the feature and maintain binary compatibility with
0.94.4 (in which case the feature simply wouldn't be available). (See the
third sketch after this list.)
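
(First sketch, for the RegionScanner case. This is a deliberately simplified,
hypothetical stand-in; the interface and method names below are illustrative,
not the real HBase API. But it shows the mechanism: adding a method to an
interface that coprocessors implement breaks their already-compiled classes
at runtime.)

    import java.util.List;

    // Stand-in for RegionScanner as a coprocessor saw it in "0.94.3".
    interface Scanner {
        boolean next(List<String> results);
        void close();
    }

    // The wrapper a coprocessor would hand back from postScannerOpen,
    // compiled against the old interface so it can aggregate results.
    class AggregatingScanner implements Scanner {
        private final Scanner delegate;

        AggregatingScanner(Scanner delegate) {
            this.delegate = delegate;
        }

        public boolean next(List<String> results) {
            boolean more = delegate.next(results);
            // ... aggregation over `results` would happen here ...
            return more;
        }

        public void close() {
            delegate.close();
        }
    }

    // If "0.94.4" adds, say, `boolean next(List<String> r, int limit)` to
    // the interface, this class still loads fine, but the first time the
    // server invokes the new method on it, it dies with AbstractMethodError.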
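
(Second sketch, for the class loading case: roughly how a coprocessor gets
attached with an explicit jar path in the 0.94 line. The table name, jar
path, and observer class are made up, and the attribute format is my
recollection of the 0.94-era convention, so treat this as a sketch rather
than gospel.)

    import org.apache.hadoop.hbase.HTableDescriptor;

    public class AttachCoprocessor {
        public static void main(String[] args) {
            HTableDescriptor desc = new HTableDescriptor("my_table");
            // Format: <jar path>|<class name>|<priority>|<arguments>
            desc.setValue("coprocessor$1",
                "hdfs:///hbase/cp/my-cp.jar|com.example.MyObserver|1001|");
            // In 0.94.3, if my-cp.jar also happened to be on the region
            // server's classpath, a wrong or stale jar path here was
            // effectively ignored and the class loaded anyway. In 0.94.4
            // an invalid jar path means the coprocessor does not load.
        }
    }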
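
(Third sketch, for the essential column family case: a custom filter using
isFamilyEssential, added to the filter API in 0.94.5 (HBASE-5416, if memory
serves). The class and family names are illustrative. A 0.94.4 server simply
never calls the method, which is exactly why the same compiled filter runs
against both versions.)

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.util.Arrays;

    import org.apache.hadoop.hbase.filter.FilterBase;
    import org.apache.hadoop.hbase.util.Bytes;

    public class EssentialCfFilter extends FilterBase {
        private static final byte[] ESSENTIAL = Bytes.toBytes("cf0");

        // New in 0.94.5: families for which this returns false are only
        // loaded lazily, once the rest of the filter actually matches.
        public boolean isFamilyEssential(byte[] name) {
            return Arrays.equals(name, ESSENTIAL);
        }

        // 0.94 filters are Writables; this sketch has no state to persist.
        public void write(DataOutput out) throws IOException {
        }

        public void readFields(DataInput in) throws IOException {
        }
    }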

Maybe if we just explicitly identified compatibility issues, that would be
a good start? We'd likely need a way to find them, though.
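
(One strawman for "a way to find them": a tiny reflection probe against
whatever HBase jars are on the classpath. A real audit would diff entire
jars with a proper compatibility checker, but this shows the idea, using
the HColumnDescriptor setter from J-D's example below.)

    import java.lang.reflect.Method;

    public class CompatProbe {
        public static void main(String[] args) throws Exception {
            Class<?> hcd =
                Class.forName("org.apache.hadoop.hbase.HColumnDescriptor");
            for (Method m : hcd.getMethods()) {
                if (m.getName().equals("setMaxVersions")) {
                    // Prints the signature actually on the classpath:
                    // void return in 0.92, HColumnDescriptor from 0.94 on
                    // (HBASE-5357 changed it for the builder pattern).
                    System.out.println(m);
                }
            }
        }
    }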

     James

On 04/04/2013 03:59 PM, lars hofhansl wrote:

I agree we need both, but I'm afraid that ship has sailed.
It's not something we paid a lot of attention to, especially being
forward-binary-compatible. I would guess that there will be many more of
these issues.

Also, we have to qualify this statement somewhere. If you extend
HRegionServer, you cannot expect compatibility between releases. Of course
that is silly, but it serves the point I am making.

For client-visible classes (such as in this case) we should make it work;
we identified issues with Filters and Coprocessors in the past and kept
them binary compatible on a best-effort basis.


TL;DR: Let's fix this issue, and be wary of more such issues.


-- Lars



________________________________
   From: Andrew Purtell <apurt...@apache.org>
To: "dev@hbase.apache.org" <dev@hbase.apache.org>
Sent: Thursday, April 4, 2013 3:21 PM
Subject: Re: Does compatibility between versions also mean binary
compatibility?

"Compatible" implies both to my understanding of the term, unless
qualified.

I don't think we should qualify it. This looks like a regression to me.


On Thu, Apr 4, 2013 at 1:20 PM, Jean-Daniel Cryans <jdcry...@apache.org>
wrote:

tl;dr should two compatible versions be considered both wire and
binary compatible or just the former?

Hey devs,

0.92 is compatible with 0.94, meaning that you can run a client for
either against the other and you can roll restart from 0.92 to 0.94.

What about binary compatibility? Meaning, can you run user code
compiled against 0.92 with 0.94's jars?

Unfortunately, the answer is "no" in this case if you invoke setters
on HColumnDescriptor as you'll get:

java.lang.NoSuchMethodError:
org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(I)V

HBASE-5357 "Use builder pattern in HColumnDescriptor" changed the
method signatures by changing the return type from "void" to
"HColumnDescriptor", so they are not the same methods anymore.
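
(To make the mechanics concrete: the JVM links call sites by the full method
descriptor, return type included, so this is binary-incompatible even though
it is source-compatible. A minimal sketch:)

    import org.apache.hadoop.hbase.HColumnDescriptor;

    public class SetterCompat {
        public static void main(String[] args) {
            HColumnDescriptor col = new HColumnDescriptor("cf");
            // Compiled against 0.92, this call site references
            //   setMaxVersions(I)V
            // while 0.94's class only contains
            //   setMaxVersions(I)Lorg/apache/hadoop/hbase/HColumnDescriptor;
            // so the 0.92-built class throws NoSuchMethodError on 0.94 jars,
            // even though recompiling the same source would work fine.
            col.setMaxVersions(3);
        }
    }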

I don't think we've really discussed binary compatibility before, which
is why I'm raising it now.

Should "compatible" versions be just wire compatible or both wire and
binary compatible? The latter means we need new tests. I think it
should be both.

What do you guys think?

J-D




