Build failed in Jenkins: Phoenix | Master #2140

2018-09-27 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

--
[...truncated 1.24 MB...]
Downloading from central: 
https://repo.maven.apache.org/maven2/com/ibm/icu/icu4j-charset/60.2/icu4j-charset-60.2.jar
Progress (2): 1.6/1.9 MB | 1.1/1.4 MB
Downloaded from central: https://repo.maven.apache.org/maven2/com/ibm/icu/icu4j/60.2/icu4j-60.2.jar (0 B at 0 B/s)
Progress (2): 1.7/1.9 MB | 1.1/1.4 MB 
Downloading from central: 
https://repo.maven.apache.org/maven2/com/lmax/disruptor/3.3.6/disruptor-3.3.6.jar
Progress (2): 1.7/1.9 MB | 1.2/1.4 MB
Downloaded from central: https://repo.maven.apache.org/maven2/com/ibm/icu/icu4j-localespi/60.2/icu4j-localespi-60.2.jar (0 B at 0 B/s)
Progress (2): 1.7/1.9 MB | 1.2/1.4 MB
Downloaded from central: https://repo.maven.apache.org/maven2/com/lmax/disruptor/3.3.6/disruptor-3.3.6.jar (0 B at 0 B/s)
Progress (2): 1.8/1.9 MB | 1.3/1.4 MB
Progress (2): 1.8/1.9 MB

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #213

2018-09-27 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from https://git-wip-us.apache.org/repos/asf/phoenix.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read fe06cacc1a715f094fbf98e6d42bd9abc5053037
error: Could not read fe06cacc1a715f094fbf98e6d42bd9abc5053037
error: Could not read 0c3aa09fc7838dc61253c55817a7383e8142eecd
error: Could not read 8ae455fd46e59d94a3f6710c03549239aa5b59c3
error: Could not read 9125d203665168b728c7f10babb3f09566ca891d
error: Could not read bef8d7dfc5e9143443932aabf1ecd8176afa8ff1
error: Could not read 9390a91e906c2e46fdc0ee18315510aeb87f747d
error: Could not read 88b6f6178bc748d3dc2aea812bb7050bcf0ebd4f
remote: Counting objects: 49518, done.
remote: Compressing objects:   0% (1/24741)
remote: Compressing objects:  41% (10144/247

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #491

2018-09-27 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git init  # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-1.2^{commit} # timeout=10
Checking out Revision f4ebaff0df48e8801c11d1411f3eb205262c5c9a (origin/4.x-HBase-1.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f f4ebaff0df48e8801c11d1411f3eb205262c5c9a
Commit message: "PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split"
 > git rev-list --no-walk bd5aa2d9c40c099bbe37eac7f1a38b96fd843cfc # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-1.2] $ /bin/bash -xe /tmp/jenkins7145121568403962649.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
htrace-core4
htrace-zipkin
[Phoenix-4.x-HBase-1.2] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-core:[unknown-version], line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-kafka:[unknown-version], line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Repo

phoenix git commit: PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

2018-09-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 a9e886ccf -> 352cde5d1


PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/352cde5d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/352cde5d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/352cde5d

Branch: refs/heads/4.x-HBase-1.3
Commit: 352cde5d1e2a6e6e635036e93631e5d56bf63289
Parents: a9e886c
Author: Thomas D'Silva 
Authored: Sat Sep 15 14:27:14 2018 -0700
Committer: Thomas D'Silva 
Committed: Thu Sep 27 21:12:36 2018 -0700

--
 .../org/apache/phoenix/end2end/SplitIT.java | 260 +++
 1 file changed, 260 insertions(+)
--
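[Editorial note] The test added by this commit covers queries whose results depend on an ordered, limited scan surviving a mid-scan region split: after the split, the client must merge the ordered output of the two daughter regions and re-apply the LIMIT. A minimal, self-contained Java sketch of that merge invariant (hypothetical data and method names, not Phoenix's actual iterator code):

```java
import java.util.ArrayList;
import java.util.List;

public class OrderedLimitMerge {
    // Merge two already-sorted "daughter region" scans and apply a LIMIT,
    // as a client conceptually must do after a region splits mid-query.
    static List<Integer> mergeWithLimit(List<Integer> left, List<Integer> right, int limit) {
        List<Integer> out = new ArrayList<>();
        int i = 0, j = 0;
        while (out.size() < limit && (i < left.size() || j < right.size())) {
            // take from the left scan while its head is <= the right scan's head
            if (j >= right.size() || (i < left.size() && left.get(i) <= right.get(j))) {
                out.add(left.get(i++));
            } else {
                out.add(right.get(j++));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Rows 1..10 ended up interleaved across two daughter regions.
        List<Integer> daughterA = List.of(1, 3, 5, 7, 9);
        List<Integer> daughterB = List.of(2, 4, 6, 8, 10);
        System.out.println(mergeWithLimit(daughterA, daughterB, 4)); // [1, 2, 3, 4]
    }
}
```

The merged output equaling the pre-split ordered prefix is exactly the property an ORDER BY ... LIMIT query must preserve across a split.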


http://git-wip-us.apache.org/repos/asf/phoenix/blob/352cde5d/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
new file mode 100644
index 000..73cf1f0
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
@@ -0,0 +1,260 @@
+package org.apache.phoenix.end2end;
+
+import com.google.common.collect.Maps;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.regionserver.InternalScanner;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.phoenix.hbase.index.Indexer;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.IOException;
+import java.sql.*;
+import java.util.List;
+import java.util.Map;
+
+import static org.junit.Assert.*;
+
+public class SplitIT extends BaseUniqueNamesOwnClusterIT {
+    private static final String SPLIT_TABLE_NAME_PREFIX = "SPLIT_TABLE_";
+    private static boolean tableWasSplitDuringScannerNext = false;
+    private static byte[] splitPoint = null;
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
+        serverProps.put("hbase.coprocessor.region.classes", TestRegionObserver.class.getName());
+        serverProps.put(Indexer.CHECK_VERSION_CONF_KEY, "false");
+        Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(3);
+        clientProps.put(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(10));
+        // read rows in batches of 3 at a time
+        clientProps.put(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
+        setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
+    }
+
+    public static class TestRegionObserver extends BaseRegionObserver {
+
+        @Override
+        public boolean postScannerNext(final ObserverContext<RegionCoprocessorEnvironment> c,
+                                       final InternalScanner s, final List<Result> results, final int limit,
+                                       final boolean hasMore) throws IOException {
+            Region region = c.getEnvironment().getRegion();
+            String tableName = region.getRegionInfo().getTable().getNameAsString();
+            if (tableName.startsWith(SPLIT_TABLE_NAME_PREFIX) && results.size() > 1) {
+                int pk = (Integer) PInteger.INSTANCE.toObject(results.get(0).getRow());
+                // split when row 10 is read
+                if (pk == 10 && !tableWasSplitDuringScannerNext) {
+                    try {
+                        // split on the first row being scanned if splitPoint is null
+                        splitPoint = splitPoint != null ? splitPoint : results.get(0).getRow();
+                        splitTable(splitPoint, tableName);
+                        tableWasSplitDuringScannerNext = true;
+                    } catch (SQLException e) {
+                        throw new IOException(e);
+                    }
+                }
+            }
+            return hasMore;
+        }
+
+    }
+
+    public static void splitTable(byte[] splitPoint, String tableName) throws SQLException, IOException {
+        HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+        int nRegions = admin.getTableRegions(tableName.getBytes()
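[Editorial note] The TestRegionObserver above triggers the split from postScannerNext, which fires once per batch of rows, with QueryServices.SCAN_CACHE_SIZE_ATTRIB controlling how many rows arrive per batch. A plain-Java sketch of that control flow (hypothetical names, no HBase dependencies) showing why a small scan cache of 3 lets the hook fire mid-query when row 10 appears in a batch:

```java
import java.util.List;
import java.util.function.Consumer;

public class BatchedScanHook {
    // Simulate a scanner that delivers rows in fixed-size batches (the scan
    // cache size), invoking a hook after each batch -- analogous to the point
    // where the test's observer splits the table when it sees row 10.
    static void scan(List<Integer> rows, int batchSize, Consumer<List<Integer>> postBatchHook) {
        for (int start = 0; start < rows.size(); start += batchSize) {
            List<Integer> batch = rows.subList(start, Math.min(start + batchSize, rows.size()));
            postBatchHook.accept(batch);
        }
    }

    public static void main(String[] args) {
        scan(List.of(8, 9, 10, 11, 12, 13), 3, batch -> {
            if (batch.contains(10)) {
                // in the real test, the table would be split here,
                // while the client still holds an open scanner
                System.out.println("split triggered; first row of batch = " + batch.get(0));
            }
        });
    }
}
```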

phoenix git commit: PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

2018-09-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master 47e955e27 -> c7eeda03b


PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c7eeda03
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c7eeda03
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c7eeda03

Branch: refs/heads/master
Commit: c7eeda03b0da00a4479ecb40c439601a97e6201c
Parents: 47e955e
Author: Thomas D'Silva 
Authored: Sat Sep 15 14:27:14 2018 -0700
Committer: Thomas D'Silva 
Committed: Thu Sep 27 21:15:54 2018 -0700

--
 .../org/apache/phoenix/end2end/SplitIT.java | 260 +++
 1 file changed, 260 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c7eeda03/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
new file mode 100644
index 000..73cf1f0
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
@@ -0,0 +1,260 @@
[... identical SplitIT.java diff as in the 4.x-HBase-1.3 commit above ...]

phoenix git commit: PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

2018-09-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 b0231dfc3 -> bb2e77db8


PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bb2e77db
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bb2e77db
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bb2e77db

Branch: refs/heads/4.x-HBase-1.4
Commit: bb2e77db8ec513cb74ed92c9afbb3f69725d1a40
Parents: b0231df
Author: Thomas D'Silva 
Authored: Sat Sep 15 14:27:14 2018 -0700
Committer: Thomas D'Silva 
Committed: Thu Sep 27 21:14:11 2018 -0700

--
 .../org/apache/phoenix/end2end/SplitIT.java | 260 +++
 1 file changed, 260 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb2e77db/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
new file mode 100644
index 000..73cf1f0
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
@@ -0,0 +1,260 @@
[... identical SplitIT.java diff as in the 4.x-HBase-1.3 commit above ...]

phoenix git commit: PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

2018-09-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 bd5aa2d9c -> f4ebaff0d


PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f4ebaff0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f4ebaff0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f4ebaff0

Branch: refs/heads/4.x-HBase-1.2
Commit: f4ebaff0df48e8801c11d1411f3eb205262c5c9a
Parents: bd5aa2d
Author: Thomas D'Silva 
Authored: Sat Sep 15 14:27:14 2018 -0700
Committer: Thomas D'Silva 
Committed: Thu Sep 27 21:13:34 2018 -0700

--
 .../org/apache/phoenix/end2end/SplitIT.java | 260 +++
 1 file changed, 260 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f4ebaff0/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
new file mode 100644
index 000..73cf1f0
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
@@ -0,0 +1,260 @@
[... identical SplitIT.java diff as in the 4.x-HBase-1.3 commit above ...]

phoenix git commit: PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-0.98 9e7ea88a4 -> 0700dcdb4


PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0700dcdb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0700dcdb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0700dcdb

Branch: refs/heads/4.14-HBase-0.98
Commit: 0700dcdb4435385fe6018bee55eec332b37fc010
Parents: 9e7ea88
Author: Lars Hofhansl 
Authored: Wed Sep 26 11:18:05 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 15:38:56 2018 -0700

--
 .../end2end/UpsertSelectAutoCommitIT.java   | 22 ++---
 .../apache/phoenix/compile/UpsertCompiler.java  | 17 +++
 .../phoenix/coprocessor/ScanRegionObserver.java |  9 +++-
 .../phoenix/iterate/TableResultIterator.java| 50 +++-
 .../java/org/apache/phoenix/util/ScanUtil.java  |  8 
 5 files changed, 63 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0700dcdb/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 38d48d6..be3b8dc 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -34,10 +34,13 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -161,19 +164,24 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys");
+conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
-conn.createStatement().execute(
-"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
+conn.createStatement().execute("CREATE TABLE " + tableName
++ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
 "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
-for (int i=0; i<6; i++) {
-Statement stmt = conn.createStatement();
-int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName
++ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+HBaseAdmin admin =
+driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+for (int i=0; i<12; i++) {
+admin.split(tableName);
+int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+admin.close();
 conn.close();
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0700dcdb/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index 03db29c..4005ce7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -76,12 +76,14 @@ import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.SequenceValueParseNode;
 import org.apache.phoenix.parse.UpsertStatement;
 import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnRef;
 import org.ap
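The rewritten UpsertSelectAutoCommitIT loop above asserts a doubling invariant: because the UPSERT SELECT reads from and writes to the same table, iteration i must report exactly 2^i upserted rows, no matter how many region splits happen mid-scan. That invariant can be modeled in plain Java (no HBase; `totalRowsAfter` is a name invented here):

```java
// Models the row-doubling invariant asserted by the test: the table is
// seeded with 1 row, and each "UPSERT ... SELECT ... FROM sameTable"
// writes one new row per existing row, so iteration i upserts 2^i rows.
class RowDoublingInvariant {
    static long totalRowsAfter(int iterations) {
        long tableRows = 1; // the initial single-row UPSERT
        for (int i = 0; i < iterations; i++) {
            long upsertCount = tableRows; // SELECT sees every current row
            if (upsertCount != 1L << i) {
                throw new AssertionError("iteration " + i + ": " + upsertCount);
            }
            tableRows += upsertCount;     // each selected row is re-upserted
        }
        return tableRows;
    }

    public static void main(String[] args) {
        System.out.println(totalRowsAfter(12)); // 4096 rows after 12 doublings
    }
}
```

The fix being tested matters precisely because a split mid-iteration could otherwise cause rows to be scanned twice (or skipped), breaking the 2^i count the assertion checks.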

phoenix git commit: PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.2 e48df2277 -> 16613da6f


PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/16613da6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/16613da6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/16613da6

Branch: refs/heads/4.14-HBase-1.2
Commit: 16613da6f8abd0dc03eb6efde65549b0c64841da
Parents: e48df22
Author: Lars Hofhansl 
Authored: Wed Sep 26 11:18:05 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:59:19 2018 -0700

--
 .../end2end/UpsertSelectAutoCommitIT.java   | 22 ++---
 .../apache/phoenix/compile/UpsertCompiler.java  | 17 +++
 .../phoenix/coprocessor/ScanRegionObserver.java |  9 +++-
 .../phoenix/iterate/TableResultIterator.java| 50 +++-
 .../java/org/apache/phoenix/util/ScanUtil.java  |  8 
 5 files changed, 63 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/16613da6/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 38d48d6..d81c2d0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -34,10 +34,13 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -161,19 +164,24 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys");
+conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
-conn.createStatement().execute(
-"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
+conn.createStatement().execute("CREATE TABLE " + tableName
++ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
 "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
-for (int i=0; i<6; i++) {
-Statement stmt = conn.createStatement();
-int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName
++ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+HBaseAdmin admin =
+driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+for (int i=0; i<12; i++) {
+admin.split(TableName.valueOf(tableName));
+int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+admin.close();
 conn.close();
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/16613da6/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index c3cfa10..fb1169d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -76,12 +76,14 @@ import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.SequenceValueParseNode;
 import org.apache.phoenix.parse.UpsertStatement;
 import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnRe

phoenix git commit: PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.1 1ec88316a -> ecce879eb


PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ecce879e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ecce879e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ecce879e

Branch: refs/heads/4.14-HBase-1.1
Commit: ecce879eb6aef0140a87a52dea815d0afc77e122
Parents: 1ec8831
Author: Lars Hofhansl 
Authored: Wed Sep 26 11:18:05 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:59:34 2018 -0700

--
 .../end2end/UpsertSelectAutoCommitIT.java   | 22 ++---
 .../apache/phoenix/compile/UpsertCompiler.java  | 17 +++
 .../phoenix/coprocessor/ScanRegionObserver.java |  9 +++-
 .../phoenix/iterate/TableResultIterator.java| 50 +++-
 .../java/org/apache/phoenix/util/ScanUtil.java  |  8 
 5 files changed, 63 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ecce879e/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 38d48d6..d81c2d0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -34,10 +34,13 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -161,19 +164,24 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys");
+conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
-conn.createStatement().execute(
-"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
+conn.createStatement().execute("CREATE TABLE " + tableName
++ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
 "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
-for (int i=0; i<6; i++) {
-Statement stmt = conn.createStatement();
-int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName
++ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+HBaseAdmin admin =
+driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+for (int i=0; i<12; i++) {
+admin.split(TableName.valueOf(tableName));
+int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+admin.close();
 conn.close();
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ecce879e/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index c3cfa10..fb1169d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -76,12 +76,14 @@ import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.SequenceValueParseNode;
 import org.apache.phoenix.parse.UpsertStatement;
 import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnRe

phoenix git commit: PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.3 632d2c605 -> 7e0268591


PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7e026859
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7e026859
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7e026859

Branch: refs/heads/4.14-HBase-1.3
Commit: 7e02685919d1cf8964dafbfb87d7eee357b524de
Parents: 632d2c6
Author: Lars Hofhansl 
Authored: Wed Sep 26 11:18:05 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:59:04 2018 -0700

--
 .../end2end/UpsertSelectAutoCommitIT.java   | 22 ++---
 .../apache/phoenix/compile/UpsertCompiler.java  | 17 +++
 .../phoenix/coprocessor/ScanRegionObserver.java |  9 +++-
 .../phoenix/iterate/TableResultIterator.java| 50 +++-
 .../java/org/apache/phoenix/util/ScanUtil.java  |  8 
 5 files changed, 63 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7e026859/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 38d48d6..d81c2d0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -34,10 +34,13 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -161,19 +164,24 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys");
+conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
-conn.createStatement().execute(
-"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
+conn.createStatement().execute("CREATE TABLE " + tableName
++ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
 "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
-for (int i=0; i<6; i++) {
-Statement stmt = conn.createStatement();
-int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName
++ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+HBaseAdmin admin =
+driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+for (int i=0; i<12; i++) {
+admin.split(TableName.valueOf(tableName));
+int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+admin.close();
 conn.close();
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7e026859/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index c3cfa10..fb1169d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -76,12 +76,14 @@ import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.SequenceValueParseNode;
 import org.apache.phoenix.parse.UpsertStatement;
 import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnRe

phoenix git commit: PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.4 d101141fa -> 256d142f2


PHOENIX-4849 Phoenix may incorrectly replace TableResultIterators after HBase region splits.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/256d142f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/256d142f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/256d142f

Branch: refs/heads/4.14-HBase-1.4
Commit: 256d142f24820e8467cada06d3403b62ccb96ba0
Parents: d101141
Author: Lars Hofhansl 
Authored: Wed Sep 26 11:18:05 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:56:03 2018 -0700

--
 .../end2end/UpsertSelectAutoCommitIT.java   | 22 ++---
 .../apache/phoenix/compile/UpsertCompiler.java  | 17 +++
 .../phoenix/coprocessor/ScanRegionObserver.java |  9 +++-
 .../phoenix/iterate/TableResultIterator.java| 50 +++-
 .../java/org/apache/phoenix/util/ScanUtil.java  |  8 
 5 files changed, 63 insertions(+), 43 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/256d142f/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 38d48d6..d81c2d0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -34,10 +34,13 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -161,19 +164,24 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys");
+conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
-conn.createStatement().execute(
-"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
+conn.createStatement().execute("CREATE TABLE " + tableName
++ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
 "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
-for (int i=0; i<6; i++) {
-Statement stmt = conn.createStatement();
-int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName
++ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
+HBaseAdmin admin =
+driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+for (int i=0; i<12; i++) {
+admin.split(TableName.valueOf(tableName));
+int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+admin.close();
 conn.close();
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/256d142f/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
index c3cfa10..fb1169d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java
@@ -76,12 +76,14 @@ import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.SequenceValueParseNode;
 import org.apache.phoenix.parse.UpsertStatement;
 import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.ColumnRe

[1/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-0.98 a7b0350a4 -> 9e7ea88a4


PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/501ce1f5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/501ce1f5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/501ce1f5

Branch: refs/heads/4.14-HBase-0.98
Commit: 501ce1f5ddb01e91e7267f31861b8e5eeb88e1ea
Parents: a7b0350
Author: Ankit Singhal 
Authored: Tue Aug 21 11:54:01 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:31:59 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  |   6 +
 .../IndexHalfStoreFileReaderGenerator.java  | 137 ++-
 2 files changed, 18 insertions(+), 125 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/501ce1f5/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index d1d12fb..8bd0d72 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -123,4 +123,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 public boolean isTop() {
 return top;
 }
+
+@Override
+public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
+return new LocalIndexStoreFileScanner(this, getScanner(cacheBlocks, pread, isCompaction), true,
+getHFileReader().hasMVCCInfo(), readPt);
+}
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/501ce1f5/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
index 037b299..18d7228 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
@@ -17,12 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
-import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
-
 import java.io.IOException;
 import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
@@ -72,7 +68,7 @@ import org.apache.phoenix.util.RepairUtil;
 import com.google.common.collect.Lists;
 
 public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
-
+
 private static final String LOCAL_INDEX_AUTOMATIC_REPAIR = "local.index.automatic.repair";
 public static final Log LOG = LogFactory.getLog(IndexHalfStoreFileReaderGenerator.class);
 
@@ -152,7 +148,9 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 try {
conn = QueryUtil.getConnectionOnServer(ctx.getEnvironment().getConfiguration()).unwrap(
PhoenixConnection.class);
-PTable dataTable = IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion().getTableDesc());
+PTable dataTable =
+IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion()
+.getTableDesc());
 List<PTable> indexes = dataTable.getIndexes();
 Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers =
 new HashMap<ImmutableBytesWritable, IndexMaintainer>();
@@ -186,19 +184,12 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 return reader;
 }
 
-@SuppressWarnings("deprecation")
 @Override
-public InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
-long earliestPutTs, InternalScanner s, CompactionRequest request) throws IOException {
+public InternalScanner preCompact(
+ObserverContext<RegionCoprocessorEnvironment> c, Store store,
+InternalScanner s, ScanType scanType,
+CompactionRequest request) throws IOException {
 if (!IndexUtil.isLocalIndexStore(store)) { return
[2/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9e7ea88a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9e7ea88a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9e7ea88a

Branch: refs/heads/4.14-HBase-0.98
Commit: 9e7ea88a40a966fc4ce3d19a02ed63116150f870
Parents: 501ce1f
Author: Lars Hofhansl 
Authored: Fri Sep 14 12:40:06 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:33:00 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  | 48 
 .../IndexHalfStoreFileReaderGenerator.java  | 12 ++---
 2 files changed, 43 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9e7ea88a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index 8bd0d72..273a1b0 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
+
 import java.io.IOException;
 import java.util.Map;
 
@@ -26,10 +28,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.index.IndexMaintainer;
 
 /**
@@ -56,8 +60,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 private final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers;
 private final byte[][] viewConstants;
 private final int offset;
-private final HRegionInfo regionInfo;
+private final HRegionInfo childRegionInfo;
 private final byte[] regionStartKeyInHFile;
+private final HRegionInfo currentRegion;
 
 /**
  * @param fs
@@ -69,17 +74,19 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
  * @param conf
  * @param indexMaintainers
  * @param viewConstants
- * @param regionInfo
+ * @param childRegionInfo
  * @param regionStartKeyInHFile
  * @param splitKey
+ * @param currentRegion
  * @throws IOException
  */
 public IndexHalfStoreFileReader(final FileSystem fs, final Path p, final CacheConfig cacheConf,
 final FSDataInputStreamWrapper in, long size, final Reference r,
 final Configuration conf,
 final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers,
-final byte[][] viewConstants, final HRegionInfo regionInfo,
-byte[] regionStartKeyInHFile, byte[] splitKey) throws IOException {
+final byte[][] viewConstants, final HRegionInfo childRegionInfo,
+byte[] regionStartKeyInHFile, byte[] splitKey, HRegionInfo currentRegion)
+throws IOException {
 super(fs, p, in, size, cacheConf, conf);
 this.splitkey = splitKey == null ? r.getSplitKey() : splitKey;
 // Is it top or bottom half?
@@ -87,9 +94,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 this.splitRow = CellUtil.cloneRow(KeyValue.createKeyValueFromKey(splitkey));
 this.indexMaintainers = indexMaintainers;
 this.viewConstants = viewConstants;
-this.regionInfo = regionInfo;
+this.childRegionInfo = childRegionInfo;
 this.regionStartKeyInHFile = regionStartKeyInHFile;
 this.offset = regionStartKeyInHFile.length;
+this.currentRegion = currentRegion;
 }
 
 public int getOffset() {
@@ -105,7 +113,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 public HRegionInfo getRegionInfo() {
-return regionInfo;
+return childRegionInfo;
 }
 
 public byte[] getRegionStartKeyInHFile() {
@@ -125,8 +133,30 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 @Override
-public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
-return new LocalIndexStoreFileScanner(this, ge
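After a split, both daughter regions in the diff above keep referencing the parent region's index HFile, and IndexHalfStoreFileReader must expose only the keys on one side of the split row. A minimal model of that top/bottom filtering idea (pure Java; `halfScan` and the class name are invented here):

```java
import java.util.ArrayList;
import java.util.List;

// Models the half-file view: one shared, sorted file; the "top" reader
// yields rows >= the split key, the "bottom" reader yields rows below it.
class HalfReaderSketch {
    static List<String> halfScan(List<String> sortedRows, String splitKey, boolean top) {
        List<String> visible = new ArrayList<>();
        for (String row : sortedRows) {
            boolean inTopHalf = row.compareTo(splitKey) >= 0;
            if (inTopHalf == top) {
                visible.add(row); // hide the half belonging to the sibling region
            }
        }
        return visible;
    }
}
```

The real reader filters at the scanner level (LocalIndexStoreFileScanner) rather than copying rows, but the visibility rule is the same: a split key partitions one immutable file into two disjoint views.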

[1/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.1 3a172b2cb -> 1ec88316a


PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/961e8086
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/961e8086
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/961e8086

Branch: refs/heads/4.14-HBase-1.1
Commit: 961e8086831fbf392dfae535219fad12062dbe40
Parents: 3a172b2
Author: Ankit Singhal 
Authored: Tue Aug 21 11:54:01 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:17:37 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  |   6 +
 .../IndexHalfStoreFileReaderGenerator.java  | 138 ++-
 2 files changed, 18 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/961e8086/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index d1d12fb..8bd0d72 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -123,4 +123,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 public boolean isTop() {
 return top;
 }
+
+@Override
+public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
+return new LocalIndexStoreFileScanner(this, getScanner(cacheBlocks, pread, isCompaction), true,
+getHFileReader().hasMVCCInfo(), readPt);
+}
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/961e8086/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
index e41086b..ab65456 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
@@ -17,16 +17,11 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
-import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
-
 import java.io.IOException;
 import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -71,7 +66,7 @@ import org.apache.phoenix.util.RepairUtil;
 import com.google.common.collect.Lists;
 
 public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
-
+
 private static final String LOCAL_INDEX_AUTOMATIC_REPAIR = "local.index.automatic.repair";
 public static final Log LOG = LogFactory.getLog(IndexHalfStoreFileReaderGenerator.class);
 
@@ -153,7 +148,9 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 try {
 conn = QueryUtil.getConnectionOnServer(ctx.getEnvironment().getConfiguration()).unwrap(
 PhoenixConnection.class);
-PTable dataTable = IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion().getTableDesc());
+PTable dataTable =
+IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion()
+.getTableDesc());
 List<PTable> indexes = dataTable.getIndexes();
 Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers =
 new HashMap<ImmutableBytesWritable, IndexMaintainer>();
@@ -187,19 +184,12 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 return reader;
 }
 
-@SuppressWarnings("deprecation")
 @Override
-public InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
-long earliestPutTs, InternalScanner s, CompactionRequest request) throws IOException {
+public InternalScanner preCompact(
+ObserverContext<RegionCoprocessorEnvironment> c, Store store,
+InternalScanner s, ScanType scanType,
[2/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1ec88316
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1ec88316
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1ec88316

Branch: refs/heads/4.14-HBase-1.1
Commit: 1ec88316a720dd1706d17f5cb28e3e32a8a0d9d0
Parents: 961e808
Author: Lars Hofhansl 
Authored: Fri Sep 14 12:40:06 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:17:41 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  | 48 
 .../IndexHalfStoreFileReaderGenerator.java  | 12 ++---
 2 files changed, 43 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1ec88316/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index 8bd0d72..273a1b0 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
+
 import java.io.IOException;
 import java.util.Map;
 
@@ -26,10 +28,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.index.IndexMaintainer;
 
 /**
@@ -56,8 +60,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 private final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers;
 private final byte[][] viewConstants;
 private final int offset;
-private final HRegionInfo regionInfo;
+private final HRegionInfo childRegionInfo;
 private final byte[] regionStartKeyInHFile;
+private final HRegionInfo currentRegion;
 
 /**
  * @param fs
@@ -69,17 +74,19 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
  * @param conf
  * @param indexMaintainers
  * @param viewConstants
- * @param regionInfo
+ * @param childRegionInfo
  * @param regionStartKeyInHFile
  * @param splitKey
+ * @param currentRegion
  * @throws IOException
  */
 public IndexHalfStoreFileReader(final FileSystem fs, final Path p, final CacheConfig cacheConf,
 final FSDataInputStreamWrapper in, long size, final Reference r,
 final Configuration conf,
 final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers,
-final byte[][] viewConstants, final HRegionInfo regionInfo,
-byte[] regionStartKeyInHFile, byte[] splitKey) throws IOException {
+final byte[][] viewConstants, final HRegionInfo childRegionInfo,
+byte[] regionStartKeyInHFile, byte[] splitKey, HRegionInfo currentRegion)
+throws IOException {
 super(fs, p, in, size, cacheConf, conf);
 this.splitkey = splitKey == null ? r.getSplitKey() : splitKey;
 // Is it top or bottom half?
@@ -87,9 +94,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 this.splitRow = CellUtil.cloneRow(KeyValue.createKeyValueFromKey(splitkey));
 this.indexMaintainers = indexMaintainers;
 this.viewConstants = viewConstants;
-this.regionInfo = regionInfo;
+this.childRegionInfo = childRegionInfo;
 this.regionStartKeyInHFile = regionStartKeyInHFile;
 this.offset = regionStartKeyInHFile.length;
+this.currentRegion = currentRegion;
 }
 
 public int getOffset() {
@@ -105,7 +113,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 public HRegionInfo getRegionInfo() {
-return regionInfo;
+return childRegionInfo;
 }
 
 public byte[] getRegionStartKeyInHFile() {
@@ -125,8 +133,30 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 @Override
-public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
-return new LocalIndexStoreFileScanner(this, get

[1/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.2 197b748d1 -> e48df2277


PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7a5e5205
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7a5e5205
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7a5e5205

Branch: refs/heads/4.14-HBase-1.2
Commit: 7a5e52051c96c10386e1f7a0f2e37661eeb351d6
Parents: 197b748
Author: Ankit Singhal 
Authored: Tue Aug 21 11:54:01 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:14:10 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  |   6 +
 .../IndexHalfStoreFileReaderGenerator.java  | 138 ++-
 2 files changed, 18 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7a5e5205/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index d1d12fb..8bd0d72 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -123,4 +123,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 public boolean isTop() {
 return top;
 }
+
+@Override
+public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
+return new LocalIndexStoreFileScanner(this, getScanner(cacheBlocks, pread, isCompaction), true,
+getHFileReader().hasMVCCInfo(), readPt);
+}
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7a5e5205/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
index e41086b..ab65456 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
@@ -17,16 +17,11 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
-import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
-
 import java.io.IOException;
 import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -71,7 +66,7 @@ import org.apache.phoenix.util.RepairUtil;
 import com.google.common.collect.Lists;
 
 public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
-
+
 private static final String LOCAL_INDEX_AUTOMATIC_REPAIR = "local.index.automatic.repair";
 public static final Log LOG = LogFactory.getLog(IndexHalfStoreFileReaderGenerator.class);
 
@@ -153,7 +148,9 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 try {
 conn = QueryUtil.getConnectionOnServer(ctx.getEnvironment().getConfiguration()).unwrap(
 PhoenixConnection.class);
-PTable dataTable = IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion().getTableDesc());
+PTable dataTable =
+IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion()
+.getTableDesc());
 List<PTable> indexes = dataTable.getIndexes();
 Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers =
 new HashMap<ImmutableBytesWritable, IndexMaintainer>();
@@ -187,19 +184,12 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 return reader;
 }
 
-@SuppressWarnings("deprecation")
 @Override
-public InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
-long earliestPutTs, InternalScanner s, CompactionRequest request) throws IOException {
+public InternalScanner preCompact(
+ObserverContext<RegionCoprocessorEnvironment> c, Store store,
+InternalScanner s, ScanType scanType,

[2/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e48df227
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e48df227
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e48df227

Branch: refs/heads/4.14-HBase-1.2
Commit: e48df2277449bc5ab3a8ad10a25d464174669716
Parents: 7a5e520
Author: Lars Hofhansl 
Authored: Fri Sep 14 12:40:06 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:15:01 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  | 48 
 .../IndexHalfStoreFileReaderGenerator.java  | 12 ++---
 2 files changed, 43 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e48df227/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index 8bd0d72..273a1b0 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
+
 import java.io.IOException;
 import java.util.Map;
 
@@ -26,10 +28,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.index.IndexMaintainer;
 
 /**
@@ -56,8 +60,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 private final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers;
 private final byte[][] viewConstants;
 private final int offset;
-private final HRegionInfo regionInfo;
+private final HRegionInfo childRegionInfo;
 private final byte[] regionStartKeyInHFile;
+private final HRegionInfo currentRegion;
 
 /**
  * @param fs
@@ -69,17 +74,19 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
  * @param conf
  * @param indexMaintainers
  * @param viewConstants
- * @param regionInfo
+ * @param childRegionInfo
  * @param regionStartKeyInHFile
  * @param splitKey
+ * @param currentRegion
  * @throws IOException
  */
 public IndexHalfStoreFileReader(final FileSystem fs, final Path p, final CacheConfig cacheConf,
 final FSDataInputStreamWrapper in, long size, final Reference r,
 final Configuration conf,
 final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers,
-final byte[][] viewConstants, final HRegionInfo regionInfo,
-byte[] regionStartKeyInHFile, byte[] splitKey) throws IOException {
+final byte[][] viewConstants, final HRegionInfo childRegionInfo,
+byte[] regionStartKeyInHFile, byte[] splitKey, HRegionInfo currentRegion)
+throws IOException {
 super(fs, p, in, size, cacheConf, conf);
 this.splitkey = splitKey == null ? r.getSplitKey() : splitKey;
 // Is it top or bottom half?
@@ -87,9 +94,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 this.splitRow = CellUtil.cloneRow(KeyValue.createKeyValueFromKey(splitkey));
 this.indexMaintainers = indexMaintainers;
 this.viewConstants = viewConstants;
-this.regionInfo = regionInfo;
+this.childRegionInfo = childRegionInfo;
 this.regionStartKeyInHFile = regionStartKeyInHFile;
 this.offset = regionStartKeyInHFile.length;
+this.currentRegion = currentRegion;
 }
 
 public int getOffset() {
@@ -105,7 +113,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 public HRegionInfo getRegionInfo() {
-return regionInfo;
+return childRegionInfo;
 }
 
 public byte[] getRegionStartKeyInHFile() {
@@ -125,8 +133,30 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 @Override
-public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
-return new LocalIndexStoreFileScanner(this, get

[2/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/632d2c60
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/632d2c60
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/632d2c60

Branch: refs/heads/4.14-HBase-1.3
Commit: 632d2c6054b957b8b33823dd5b89cc419e1095d2
Parents: bfa3e81
Author: Lars Hofhansl 
Authored: Fri Sep 14 12:38:37 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:13:04 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  | 48 
 .../IndexHalfStoreFileReaderGenerator.java  | 12 ++---
 2 files changed, 43 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/632d2c60/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index 8bd0d72..273a1b0 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
+
 import java.io.IOException;
 import java.util.Map;
 
@@ -26,10 +28,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.index.IndexMaintainer;
 
 /**
@@ -56,8 +60,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 private final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers;
 private final byte[][] viewConstants;
 private final int offset;
-private final HRegionInfo regionInfo;
+private final HRegionInfo childRegionInfo;
 private final byte[] regionStartKeyInHFile;
+private final HRegionInfo currentRegion;
 
 /**
  * @param fs
@@ -69,17 +74,19 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
  * @param conf
  * @param indexMaintainers
  * @param viewConstants
- * @param regionInfo
+ * @param childRegionInfo
  * @param regionStartKeyInHFile
  * @param splitKey
+ * @param currentRegion
  * @throws IOException
  */
 public IndexHalfStoreFileReader(final FileSystem fs, final Path p, final CacheConfig cacheConf,
 final FSDataInputStreamWrapper in, long size, final Reference r,
 final Configuration conf,
 final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers,
-final byte[][] viewConstants, final HRegionInfo regionInfo,
-byte[] regionStartKeyInHFile, byte[] splitKey) throws IOException {
+final byte[][] viewConstants, final HRegionInfo childRegionInfo,
+byte[] regionStartKeyInHFile, byte[] splitKey, HRegionInfo currentRegion)
+throws IOException {
 super(fs, p, in, size, cacheConf, conf);
 this.splitkey = splitKey == null ? r.getSplitKey() : splitKey;
 // Is it top or bottom half?
@@ -87,9 +94,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 this.splitRow = CellUtil.cloneRow(KeyValue.createKeyValueFromKey(splitkey));
 this.indexMaintainers = indexMaintainers;
 this.viewConstants = viewConstants;
-this.regionInfo = regionInfo;
+this.childRegionInfo = childRegionInfo;
 this.regionStartKeyInHFile = regionStartKeyInHFile;
 this.offset = regionStartKeyInHFile.length;
+this.currentRegion = currentRegion;
 }
 
 public int getOffset() {
@@ -105,7 +113,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 public HRegionInfo getRegionInfo() {
-return regionInfo;
+return childRegionInfo;
 }
 
 public byte[] getRegionStartKeyInHFile() {
@@ -125,8 +133,30 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 @Override
-public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
-return new LocalIndexStoreFileScanner(this, get

[1/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.3 6e02182be -> 632d2c605


PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bfa3e81e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bfa3e81e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bfa3e81e

Branch: refs/heads/4.14-HBase-1.3
Commit: bfa3e81e8bc2440b478c5ba0c67302692f699965
Parents: 6e02182
Author: Ankit Singhal 
Authored: Tue Aug 21 11:51:50 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:12:43 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  |   6 +
 .../IndexHalfStoreFileReaderGenerator.java  | 138 ++-
 2 files changed, 18 insertions(+), 126 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bfa3e81e/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index d1d12fb..8bd0d72 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -123,4 +123,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 public boolean isTop() {
 return top;
 }
+
+@Override
+public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt) {
+return new LocalIndexStoreFileScanner(this, getScanner(cacheBlocks, pread, isCompaction), true,
+getHFileReader().hasMVCCInfo(), readPt);
+}
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bfa3e81e/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
index e41086b..ab65456 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
@@ -17,16 +17,11 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
-import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
-
 import java.io.IOException;
 import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -71,7 +66,7 @@ import org.apache.phoenix.util.RepairUtil;
 import com.google.common.collect.Lists;
 
 public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
-
+
 private static final String LOCAL_INDEX_AUTOMATIC_REPAIR = "local.index.automatic.repair";
 public static final Log LOG = LogFactory.getLog(IndexHalfStoreFileReaderGenerator.class);
 
@@ -153,7 +148,9 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 try {
 conn = QueryUtil.getConnectionOnServer(ctx.getEnvironment().getConfiguration()).unwrap(
 PhoenixConnection.class);
-PTable dataTable = IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion().getTableDesc());
+PTable dataTable =
+IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion()
+.getTableDesc());
 List<PTable> indexes = dataTable.getIndexes();
 Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers =
 new HashMap<ImmutableBytesWritable, IndexMaintainer>();
@@ -187,19 +184,12 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 return reader;
 }
 
-@SuppressWarnings("deprecation")
 @Override
-public InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
-long earliestPutTs, InternalScanner s, CompactionRequest request) throws IOException {
+public InternalScanner preCompact(
+ObserverContext<RegionCoprocessorEnvironment> c, Store store,
+InternalScanner s, ScanType scanType,

[2/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d101141f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d101141f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d101141f

Branch: refs/heads/4.14-HBase-1.4
Commit: d101141fa7fd2a4908f88d0034ecd475c4f6747c
Parents: 7dc6e89
Author: Lars Hofhansl 
Authored: Fri Sep 14 12:35:57 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:01:57 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  | 42 
 .../IndexHalfStoreFileReaderGenerator.java  | 20 +-
 2 files changed, 44 insertions(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d101141f/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index 1f3113c..e2dff03 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -17,6 +17,8 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
+import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
+
 import java.io.IOException;
 import java.util.Map;
 
@@ -26,10 +28,12 @@ import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.CellUtil;
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.io.FSDataInputStreamWrapper;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.io.Reference;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.index.IndexMaintainer;
 
 /**
@@ -56,8 +60,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 private final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers;
 private final byte[][] viewConstants;
 private final int offset;
-private final HRegionInfo regionInfo;
+private final HRegionInfo childRegionInfo;
 private final byte[] regionStartKeyInHFile;
+private final HRegionInfo currentRegion;
 
 /**
  * @param fs
@@ -69,7 +74,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
  * @param conf
  * @param indexMaintainers
  * @param viewConstants
- * @param regionInfo
+ * @param childRegionInfo
  * @param regionStartKeyInHFile
  * @param splitKey
  * @throws IOException
@@ -78,8 +83,9 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 final FSDataInputStreamWrapper in, long size, final Reference r,
 final Configuration conf,
 final Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers,
-final byte[][] viewConstants, final HRegionInfo regionInfo,
-byte[] regionStartKeyInHFile, byte[] splitKey) throws IOException {
+final byte[][] viewConstants, final HRegionInfo childRegionInfo,
+byte[] regionStartKeyInHFile, byte[] splitKey, HRegionInfo currentRegion)
+throws IOException {
 super(fs, p, in, size, cacheConf, conf);
 this.splitkey = splitKey == null ? r.getSplitKey() : splitKey;
 // Is it top or bottom half?
@@ -87,9 +93,10 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 this.splitRow = CellUtil.cloneRow(KeyValue.createKeyValueFromKey(splitkey));
 this.indexMaintainers = indexMaintainers;
 this.viewConstants = viewConstants;
-this.regionInfo = regionInfo;
+this.childRegionInfo = childRegionInfo;
 this.regionStartKeyInHFile = regionStartKeyInHFile;
 this.offset = regionStartKeyInHFile.length;
+this.currentRegion = currentRegion;
 }
 
 public int getOffset() {
@@ -105,7 +112,7 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 }
 
 public HRegionInfo getRegionInfo() {
-return regionInfo;
+return childRegionInfo;
 }
 
 public byte[] getRegionStartKeyInHFile() {
@@ -126,8 +133,29 @@ public class IndexHalfStoreFileReader extends StoreFile.Reader {
 
 @Override
 public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt,
-long scannerOrder, boolean canOptimizeForNonNullColumn) {
+  

[1/2] phoenix git commit: PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)

2018-09-27 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.4 bbf95b76a -> d101141fa


PHOENIX-4839 IndexHalfStoreFileReaderGenerator throws NullPointerException (Aman Poonia)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7dc6e890
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7dc6e890
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7dc6e890

Branch: refs/heads/4.14-HBase-1.4
Commit: 7dc6e8901cb4b77ec30ac2668869a1f6837bf9ca
Parents: bbf95b7
Author: Ankit Singhal 
Authored: Tue Aug 21 11:52:29 2018 -0700
Committer: Vincent Poon 
Committed: Thu Sep 27 14:00:10 2018 -0700

--
 .../regionserver/IndexHalfStoreFileReader.java  |   7 +
 .../IndexHalfStoreFileReaderGenerator.java  | 133 ++-
 2 files changed, 17 insertions(+), 123 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7dc6e890/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
index d1d12fb..1f3113c 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReader.java
@@ -123,4 +123,11 @@ public class IndexHalfStoreFileReader extends 
StoreFile.Reader {
 public boolean isTop() {
 return top;
 }
+
+@Override
+public StoreFileScanner getStoreFileScanner(boolean cacheBlocks, boolean pread, boolean isCompaction, long readPt,
+long scannerOrder, boolean canOptimizeForNonNullColumn) {
+return new LocalIndexStoreFileScanner(this, getScanner(cacheBlocks, pread, isCompaction), true,
+getHFileReader().hasMVCCInfo(), readPt, scannerOrder, canOptimizeForNonNullColumn);
+}
 }
\ No newline at end of file
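For readers skimming the patch above: the fix adds a getStoreFileScanner() override so reads of a half store file always go through a LocalIndexStoreFileScanner instead of the raw HFile scanner. Below is a minimal, self-contained sketch of that wrap-and-delegate pattern; the types (Scanner, RawScanner, LocalIndexScanner) are hypothetical simplifications, not the real HBase/Phoenix API.

```java
// Hypothetical simplified types illustrating the delegation the patch adds.
interface Scanner {
    String next();
}

// Stands in for the raw HFile scanner returned by getScanner(...).
class RawScanner implements Scanner {
    private final String[] rows = {"row1", "row2"};
    private int i = 0;
    public String next() {
        return i < rows.length ? rows[i++] : null;
    }
}

// Stands in for LocalIndexStoreFileScanner: decorates the raw scanner,
// the way the real class rewrites local-index row keys on the fly.
class LocalIndexScanner implements Scanner {
    private final Scanner delegate;
    LocalIndexScanner(Scanner delegate) { this.delegate = delegate; }
    public String next() {
        String row = delegate.next();
        return row == null ? null : "local-index:" + row;
    }
}

public class Sketch {
    // Mirrors the overridden getStoreFileScanner(): always wrap the raw scanner.
    static Scanner getStoreFileScanner(Scanner raw) {
        return new LocalIndexScanner(raw);
    }
    public static void main(String[] args) {
        Scanner s = getStoreFileScanner(new RawScanner());
        System.out.println(s.next()); // local-index:row1
        System.out.println(s.next()); // local-index:row2
    }
}
```

The real override additionally threads through cacheBlocks, pread, readPt, and MVCC info; the sketch keeps only the wrap-and-delegate shape.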

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7dc6e890/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
--
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
index 74243e1..bf83147 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
@@ -17,16 +17,11 @@
  */
 package org.apache.hadoop.hbase.regionserver;
 
-import static org.apache.phoenix.coprocessor.BaseScannerRegionObserver.SCAN_START_ROW_SUFFIX;
-
 import java.io.IOException;
 import java.sql.SQLException;
-import java.util.ArrayList;
-import java.util.Collection;
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;
-import java.util.NavigableSet;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -153,7 +148,9 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 try {
 conn = QueryUtil.getConnectionOnServer(ctx.getEnvironment().getConfiguration()).unwrap(
 PhoenixConnection.class);
-PTable dataTable = IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion().getTableDesc());
+PTable dataTable =
+IndexUtil.getPDataTable(conn, ctx.getEnvironment().getRegion()
+.getTableDesc());
 List<PTable> indexes = dataTable.getIndexes();
 Map<ImmutableBytesWritable, IndexMaintainer> indexMaintainers =
 new HashMap<ImmutableBytesWritable, IndexMaintainer>();
@@ -187,19 +184,13 @@ public class IndexHalfStoreFileReaderGenerator extends BaseRegionObserver {
 return reader;
 }
 
-@SuppressWarnings("deprecation")
 @Override
-public InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
-Store store, List<? extends KeyValueScanner> scanners, ScanType scanType,
-long earliestPutTs, InternalScanner s, CompactionRequest request) throws IOException {
+public InternalScanner preCompactScannerOpen(
+org.apache.hadoop.hbase.coprocessor.ObserverContext<RegionCoprocessorEnvironment> c, Store store,
+java.util.List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs,
+InternalScanner s, CompactionRequest request) throws IOException {
+
 if (!IndexUtil.isLocalIndexStore(store)) { return s; }
-Scan scan = null;
-if (s!=null)

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #769

2018-09-27 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H24 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins5951528800327597729.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386416
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98958120 kB
MemFree:26254844 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  146M  9.3G   2% /run
/dev/sda1   364G  294G   52G  86% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/loop1   32M   32M 0 100% /snap/snapcraft/1594
/dev/loop3   87M   87M 0 100% /snap/core/5145
tmpfs   1.0M 0  1.0M   0% /var/snap/lxd/common/ns
/dev/loop5   88M   88M 0 100% /snap/core/5328
/dev/loop6   28M   28M 0 100% /snap/snapcraft/1803
tmpfs   9.5G 0  9.5G   0% /run/user/910
/dev/loop4   28M   28M 0 100% /snap/snapcraft/1871
/dev/loop9   64M   64M 0 100% /snap/lxd/8721
/dev/loop10  64M   64M 0 100% /snap/lxd/8774
/dev/loop11  64M   64M 0 100% /snap/lxd/8848
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
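The "Received fatal alert: protocol_version" above is the classic symptom of an old JRE (Java 7 defaults to TLS 1.0) talking to Maven Central, which requires TLS 1.2 or newer. A hedged workaround sketch, assuming the build cannot simply move to a newer JDK; the MAVEN_OPTS value is the standard JVM flag, not something taken from this log:

```shell
# Hypothetical workaround, not from this build: force the JVM running Maven
# to negotiate TLS 1.2 so repo.maven.apache.org stops rejecting the handshake.
# Upgrading the build to Java 8+ is the cleaner long-term fix.
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
echo "$MAVEN_OPTS"
```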