Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #214

2018-09-28 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Phoenix Compile Compatibility with HBase #770

2018-09-28 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins3047926430651314366.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 64055
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
physical id : 0
MemTotal:   16432472 kB
MemFree: 4412068 kB
Filesystem  Size  Used Avail Use% Mounted on
udev7.9G   12K  7.9G   1% /dev
tmpfs   1.6G  159M  1.5G  10% /run
/dev/vda1   394G  286G   93G  76% /
none4.0K 0  4.0K   0% /sys/fs/cgroup
none5.0M 0  5.0M   0% /run/lock
none7.9G  752K  7.9G   1% /run/shm
none100M 0  100M   0% /run/user
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
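The "Received fatal alert: protocol_version" above usually means the build JVM offered a TLS 1.0/1.1 handshake that repo.maven.apache.org no longer accepts; JDK 7 does not enable TLSv1.2 by default. A minimal sketch (hypothetical diagnostic, not part of this job) to see what the JVM enables versus supports:

    import java.util.Arrays;
    import javax.net.ssl.SSLContext;

    // Prints the TLS protocol versions the default SSLContext enables vs.
    // supports. If TLSv1.2 is supported but not enabled, the handshake with
    // Maven Central fails exactly as in the log above.
    public class TlsCheck {
        public static void main(String[] args) throws Exception {
            SSLContext ctx = SSLContext.getDefault();
            System.out.println("Enabled:   " + Arrays.toString(ctx.getDefaultSSLParameters().getProtocols()));
            System.out.println("Supported: " + Arrays.toString(ctx.getSupportedSSLParameters().getProtocols()));
        }
    }

If TLSv1.2 shows up as supported but not enabled, passing -Dhttps.protocols=TLSv1.2 to the Maven JVM (for example via MAVEN_OPTS) is a common workaround; moving the job to JDK 8 or later avoids the issue entirely.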


phoenix git commit: Upgrade to Tephra 0.15 and use Table instead of HTableInterface

2018-09-28 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/omid2 790064b54 -> 862929248


Upgrade to Tephra 0.15 and use Table instead of HTableInterface


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/86292924
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/86292924
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/86292924

Branch: refs/heads/omid2
Commit: 8629292483ac24b53539370654d2541c0d504486
Parents: 790064b
Author: James Taylor 
Authored: Fri Sep 28 09:52:52 2018 -0700
Committer: James Taylor 
Committed: Fri Sep 28 09:52:52 2018 -0700

--
 .../phoenix/tx/FlappingTransactionIT.java   |   3 +-
 .../apache/phoenix/execute/DelegateHTable.java  |  57 +---
 .../apache/phoenix/execute/MutationState.java   |  12 +-
 .../PhoenixTxIndexMutationGenerator.java|   3 +-
 .../phoenix/iterate/TableResultIterator.java|   6 +-
 .../transaction/OmidTransactionContext.java |   8 +-
 .../transaction/OmidTransactionTable.java   | 261 ---
 .../transaction/PhoenixTransactionContext.java  |  10 +-
 .../transaction/TephraTransactionContext.java   |   7 +-
 .../java/org/apache/phoenix/util/TestUtil.java  |   3 +-
 pom.xml |   2 +-
 11 files changed, 79 insertions(+), 293 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/86292924/phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
index 3c164ea..5ba8dd9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -252,7 +253,7 @@ public class FlappingTransactionIT extends ParallelStatsDisabledIT {
 // Either set txn on all existing OmidTransactionTable or throw exception
 // when attempting to get OmidTransactionTable if a txn is not in progress.
 txContext.begin(); 
-HTableInterface txTable = txContext.getTransactionalTable(htable, false);
+Table txTable = txContext.getTransactionalTable(htable, false);
 
 // Use HBase APIs to add a new row
 Put put = new Put(Bytes.toBytes("z"));

http://git-wip-us.apache.org/repos/asf/phoenix/blob/86292924/phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java
index f45b356..0618945 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java
@@ -28,7 +28,6 @@ import org.apache.hadoop.hbase.client.Append;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
 import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Increment;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
@@ -36,6 +35,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Row;
 import org.apache.hadoop.hbase.client.RowMutations;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.client.coprocessor.Batch.Call;
 import org.apache.hadoop.hbase.client.coprocessor.Batch.Callback;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
@@ -46,19 +46,14 @@ import com.google.protobuf.Message;
 import com.google.protobuf.Service;
 import com.google.protobuf.ServiceException;
 
-public class DelegateHTable implements HTableInterface {
-protected final HTableInterface delegate;
+public class DelegateHTable implements Table {
+protected final Table delegate;
 
-public DelegateHTable(HTableInterface delegate) {
+public DelegateHTable(Table delegate) {
 this.delegate = delegate;
 }
 
 @Override
-public byte[] getTableName() {
-return delegate.getTableName();
-}
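For context, getTableName() is dropped because the Table interface that replaces HTableInterface in HBase 1.x exposes the table's identity through getName(), which returns a TableName rather than a raw byte[]. A minimal sketch of the call-site migration (the class name and "MY_TABLE" are illustrative, not from this commit):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Table;

    public class TableNameMigration {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("MY_TABLE"))) {
                // Old API: byte[] name = table.getTableName();
                byte[] name = table.getName().getName(); // TableName carries the raw bytes
                System.out.println(table.getName().getNameAsString() + " / " + name.length + " bytes");
            }
        }
    }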

Build failed in Jenkins: Phoenix-omid2 #92

2018-09-28 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H31
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1036)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
java.nio.file.FileSystemException: : No space left on device
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at hudson.FilePath.mkdirs(FilePath.java:3103)
at hudson.FilePath.access$900(FilePath.java:209)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1221)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1217)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2918)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused: java.io.IOException: remote file operation failed:  at hudson.remoting.Channel@3bc2a34f:H31
at hudson.FilePath.act(FilePath.java:1043)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H31
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1036)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
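The root cause here is straightforward: FilePath.mkdirs on the H31 agent fails with ENOSPC while creating the workspace (the path itself is elided in the log). A minimal sketch (hypothetical diagnostic, not part of the job) for checking free space on the workspace mount from Java:

    import java.io.File;

    // Reports usable vs. total bytes on the volume holding the given path,
    // the same numbers a "df" on the agent would show for that mount.
    public class DiskCheck {
        public static void main(String[] args) {
            File ws = new File(args.length > 0 ? args[0] : ".");
            System.out.printf("%s: %.1f GB usable of %.1f GB total%n",
                    ws.getAbsolutePath(), ws.getUsableSpace() / 1e9, ws.getTotalSpace() / 1e9);
        }
    }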
 

Build failed in Jenkins: Phoenix-omid2 #93

2018-09-28 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H31 (ubuntu xenial) in workspace 

Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H31
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1036)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
java.nio.file.FileSystemException: : No space left on device
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at hudson.FilePath.mkdirs(FilePath.java:3103)
at hudson.FilePath.access$900(FilePath.java:209)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1221)
at hudson.FilePath$Mkdirs.invoke(FilePath.java:1217)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2918)
at hudson.remoting.UserRequest.perform(UserRequest.java:212)
at hudson.remoting.UserRequest.perform(UserRequest.java:54)
at hudson.remoting.Request$2.run(Request.java:369)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused: java.io.IOException: remote file operation failed:  at hudson.remoting.Channel@3bc2a34f:H31
at hudson.FilePath.act(FilePath.java:1043)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Retrying after 10 seconds
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H31
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1741)
at hudson.remoting.UserRequest$ExceptionResponse.retrieve(UserRequest.java:357)
at hudson.remoting.Channel.call(Channel.java:955)
at hudson.FilePath.act(FilePath.java:1036)
at hudson.FilePath.act(FilePath.java:1025)
at hudson.FilePath.mkdirs(FilePath.java:1213)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson

Build failed in Jenkins: Phoenix-omid2 #94

2018-09-28 Thread Apache Jenkins Server
See 


Changes:

[jamestaylor] Upgrade to Tephra 0.15 and use Table instead of HTableInterface

--
[...truncated 6.63 KB...]
[INFO] Unable to find resource 'org.codehaus.jackson:jackson-mapper-asl:pom:1.8.3' in repository apache release (https://repository.apache.org/content/repositories/releases/)
Downloading: http://download.java.net/maven/2//org/codehaus/jackson/jackson-mapper-asl/1.8.3/jackson-mapper-asl-1.8.3.pom
0b downloaded  (jackson-mapper-asl-1.8.3.pom)
[WARNING] *** CHECKSUM FAILED - Invalid checksum file - RETRYING
Downloading: http://download.java.net/maven/2//org/codehaus/jackson/jackson-mapper-asl/1.8.3/jackson-mapper-asl-1.8.3.pom
0b downloaded  (jackson-mapper-asl-1.8.3.pom)
[WARNING] *** CHECKSUM FAILED - Invalid checksum file - IGNORING
[INFO] Unable to find resource 'org.codehaus.jackson:jackson-mapper-asl:pom:1.8.3' in repository java.net (http://download.java.net/maven/2/)
Downloading: http://repository.jboss.org/nexus/content/groups/public-jboss//org/codehaus/jackson/jackson-mapper-asl/1.8.3/jackson-mapper-asl-1.8.3.pom
1/1K1K downloaded  (jackson-mapper-asl-1.8.3.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-auth/2.5.1/hadoop-auth-2.5.1.pom
4/6K6/6K6K downloaded  (hadoop-auth-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.pom
4/5K5/5K5K downloaded  (httpclient-4.2.5.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/httpcomponents/httpcomponents-client/4.2.5/httpcomponents-client-4.2.5.pom
4/14K7/14K8/14K12/14K14/14K14K downloaded  (httpcomponents-client-4.2.5.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-mapreduce-client-core/2.5.1/hadoop-mapreduce-client-core-2.5.1.pom
3/3K3K downloaded  (hadoop-mapreduce-client-core-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-mapreduce-client/2.5.1/hadoop-mapreduce-client-2.5.1.pom
4/6K6/6K6K downloaded  (hadoop-mapreduce-client-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-yarn-common/2.5.1/hadoop-yarn-common-2.5.1.pom
4/9K7/9K8/9K9/9K9K downloaded  (hadoop-yarn-common-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-yarn/2.5.1/hadoop-yarn-2.5.1.pom
3/3K3K downloaded  (hadoop-yarn-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/hadoop/hadoop-yarn-api/2.5.1/hadoop-yarn-api-2.5.1.pom
4/4K4/4K4K downloaded  (hadoop-yarn-api-2.5.1.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.pom
[INFO] Unable to find resource 'org.codehaus.jackson:jackson-jaxrs:pom:1.9.13' in repository apache release (https://repository.apache.org/content/repositories/releases/)
Downloading: http://download.java.net/maven/2//org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.pom
0b downloaded  (jackson-jaxrs-1.9.13.pom)
[WARNING] *** CHECKSUM FAILED - Invalid checksum file - RETRYING
Downloading: http://download.java.net/maven/2//org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.pom
0b downloaded  (jackson-jaxrs-1.9.13.pom)
[WARNING] *** CHECKSUM FAILED - Invalid checksum file - IGNORING
[INFO] Unable to find resource 'org.codehaus.jackson:jackson-jaxrs:pom:1.9.13' in repository java.net (http://download.java.net/maven/2/)
Downloading: http://repository.jboss.org/nexus/content/groups/public-jboss//org/codehaus/jackson/jackson-jaxrs/1.9.13/jackson-jaxrs-1.9.13.pom
1/1K1K downloaded  (jackson-jaxrs-1.9.13.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/httpcomponents/httpcore/4.2.4/httpcore-4.2.4.pom
4/5K5/5K5K downloaded  (httpcore-4.2.4.pom)
Downloading: https://repository.apache.org/content/repositories/releases//org/apache/httpcomponents/httpcomponents-core/4.2.4/httpcomponents-core-4.2.4.pom
4/11K7/11K8/11K11/11K11K downloaded  (httpcomponents-core-4.2.4.pom)
Downloading: http://repo1.maven.org/maven2/org/apache/hbase/hbase-client/1.2.5/hbase-client-1.2.5.jar
Downloading: https://repository.apache.org/content/repositories/snapshots/org/apache/httpcomponents/httpclient/4.2.5/httpclient-4.2.5.jar
Downloading: http://repo1.maven.org/maven2/org/apache/hadoop/hadoop-common/2.5.1/hadoop-common-2.5.1.jar
Downloading: http://repo1.maven.org/maven2/org/jruby/jcodings/jcodings/1.0.8/jcodings-1.0.8.jar
4/2893K8/2893K12/2893K13/2893K17/2893K21/2893K25/2893K29/2893K33/2893K37/2893K41/2893K41/2893K
 4/1260K45/2893K 4/1260K49/2893K 4/1260K53/2893K 4/1260K57/2893K 
4/1260K61/2893K 4/1260K65/2893K 4/1260K69/2
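The repeated "0b downloaded" plus "CHECKSUM FAILED" pairs above mean the java.net repository is returning empty files; Maven hashes each download and compares it with the .sha1 sidecar fetched alongside, and an empty artifact can never match. A rough sketch of that comparison (hypothetical helper, not Maven's own code):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    // Mirrors the check Maven performs: hash the downloaded artifact and
    // compare with the .sha1 sidecar. A 0-byte artifact hashes to
    // da39a3ee..., which never matches, hence the CHECKSUM FAILED retries.
    public class ChecksumCheck {
        public static void main(String[] args) throws Exception {
            if (args.length < 1) {
                System.err.println("usage: ChecksumCheck <artifact-file>");
                return;
            }
            Path artifact = Paths.get(args[0]);
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(Files.readAllBytes(artifact));
            StringBuilder hex = new StringBuilder();
            for (byte b : digest) {
                hex.append(String.format("%02x", b));
            }
            String expected = new String(Files.readAllBytes(Paths.get(args[0] + ".sha1"))).trim().split("\\s+")[0];
            System.out.println(hex.toString().equalsIgnoreCase(expected) ? "checksum OK" : "CHECKSUM FAILED");
        }
    }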

phoenix git commit: After HBASE-20940 any local index query will open all HFiles of every Region involved in the query.

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.4 256d142f2 -> 4af45a056


After HBASE-20940 any local index query will open all HFiles of every Region 
involved in the query.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4af45a05
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4af45a05
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4af45a05

Branch: refs/heads/4.14-HBase-1.4
Commit: 4af45a056a60259c6cb09cc56bf9a4f745063651
Parents: 256d142
Author: Lars Hofhansl 
Authored: Sun Sep 23 22:34:34 2018 -0700
Committer: Vincent Poon 
Committed: Fri Sep 28 14:53:28 2018 -0700

--
 .../phoenix/iterate/RegionScannerFactory.java| 19 +--
 1 file changed, 1 insertion(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4af45a05/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
index aed5805..d81224d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
@@ -37,7 +37,6 @@ import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.KeyValueSchema;
-import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ValueBitSet;
 import org.apache.phoenix.schema.tuple.*;
 import org.apache.phoenix.transaction.PhoenixTransactionContext;
@@ -45,7 +44,6 @@ import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.ServerUtil;
-import org.apache.tephra.Transaction;
 
 import java.io.IOException;
 import java.util.List;
@@ -103,25 +101,10 @@ public abstract class RegionScannerFactory {
   final ImmutableBytesWritable ptr, final boolean useQualifierAsListIndex) {
 return new RegionScanner() {
 
-  private boolean hasReferences = checkForReferenceFiles();
   private HRegionInfo regionInfo = env.getRegionInfo();
   private byte[] actualStartKey = getActualStartKey();
   private boolean useNewValueColumnQualifier = EncodedColumnsUtil.useNewValueColumnQualifier(scan);
 
-  // If there are any reference files after local index region merge some cases we might
-  // get the records less than scan start row key. This will happen when we replace the
-  // actual region start key with merge region start key. This method gives whether are
-  // there any reference files in the region or not.
-  private boolean checkForReferenceFiles() {
-if(!ScanUtil.isLocalIndex(scan)) return false;
-for (byte[] family : scan.getFamilies()) {
-  if (getRegion().getStore(family).hasReferences()) {
-return true;
-  }
-}
-return false;
-  }
-
   // Get the actual scan start row of local index. This will be used to compare the row
   // key of the results less than scan start row when there are references.
   public byte[] getActualStartKey() {
@@ -182,7 +165,7 @@ public abstract class RegionScannerFactory {
 arrayElementCell = result.get(arrayElementCellPosition);
   }
   if (ScanUtil.isLocalIndex(scan) && !ScanUtil.isAnalyzeTable(scan)) {
-if(hasReferences && actualStartKey!=null) {
+if(actualStartKey!=null) {
   next = scanTillScanStartRow(s, arrayKVRefs, arrayFuncRefs, result,
   null, arrayElementCell);
   if (result.isEmpty()) {
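Worth noting why the guard is deleted rather than fixed: per the commit message, after HBASE-20940 the Store.hasReferences() probe ends up opening all HFiles of every Region involved in the query, so paying that cost on every scan is no longer acceptable. A condensed, compile-only sketch of the removed check (assumes the HBase 1.x server-side Region/Store API; the isLocalIndex stand-in is illustrative, the real code uses ScanUtil.isLocalIndex):

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.regionserver.Region;

    final class ReferenceFileGuard {
        // True if any store touched by the scan still holds reference files
        // (e.g. just after a region split/merge); evaluating this once per
        // scan is the cost the commit above removes.
        static boolean checkForReferenceFiles(Region region, Scan scan) {
            if (!isLocalIndex(scan)) {
                return false;
            }
            for (byte[] family : scan.getFamilies()) {
                if (region.getStore(family).hasReferences()) {
                    return true;
                }
            }
            return false;
        }

        private static boolean isLocalIndex(Scan scan) {
            // Illustrative stand-in for ScanUtil.isLocalIndex(scan).
            return scan.getAttribute("_LOCAL_INDEX_") != null;
        }
    }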



phoenix git commit: After HBASE-20940 any local index query will open all HFiles of every Region involved in the query.

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.3 7e0268591 -> 8a3deb7b5


After HBASE-20940 any local index query will open all HFiles of every Region 
involved in the query.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8a3deb7b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8a3deb7b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8a3deb7b

Branch: refs/heads/4.14-HBase-1.3
Commit: 8a3deb7b566f015c4aa0beadce81abb93963c4e3
Parents: 7e02685
Author: Lars Hofhansl 
Authored: Sun Sep 23 22:35:07 2018 -0700
Committer: Vincent Poon 
Committed: Fri Sep 28 14:56:23 2018 -0700

--
 .../phoenix/iterate/RegionScannerFactory.java| 19 +--
 1 file changed, 1 insertion(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a3deb7b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
index aed5805..d81224d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
@@ -37,7 +37,6 @@ import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.KeyValueSchema;
-import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ValueBitSet;
 import org.apache.phoenix.schema.tuple.*;
 import org.apache.phoenix.transaction.PhoenixTransactionContext;
@@ -45,7 +44,6 @@ import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.ServerUtil;
-import org.apache.tephra.Transaction;
 
 import java.io.IOException;
 import java.util.List;
@@ -103,25 +101,10 @@ public abstract class RegionScannerFactory {
   final ImmutableBytesWritable ptr, final boolean useQualifierAsListIndex) {
 return new RegionScanner() {
 
-  private boolean hasReferences = checkForReferenceFiles();
   private HRegionInfo regionInfo = env.getRegionInfo();
   private byte[] actualStartKey = getActualStartKey();
   private boolean useNewValueColumnQualifier = EncodedColumnsUtil.useNewValueColumnQualifier(scan);
 
-  // If there are any reference files after local index region merge some cases we might
-  // get the records less than scan start row key. This will happen when we replace the
-  // actual region start key with merge region start key. This method gives whether are
-  // there any reference files in the region or not.
-  private boolean checkForReferenceFiles() {
-if(!ScanUtil.isLocalIndex(scan)) return false;
-for (byte[] family : scan.getFamilies()) {
-  if (getRegion().getStore(family).hasReferences()) {
-return true;
-  }
-}
-return false;
-  }
-
   // Get the actual scan start row of local index. This will be used to compare the row
   // key of the results less than scan start row when there are references.
   public byte[] getActualStartKey() {
@@ -182,7 +165,7 @@ public abstract class RegionScannerFactory {
 arrayElementCell = result.get(arrayElementCellPosition);
   }
   if (ScanUtil.isLocalIndex(scan) && !ScanUtil.isAnalyzeTable(scan)) {
-if(hasReferences && actualStartKey!=null) {
+if(actualStartKey!=null) {
   next = scanTillScanStartRow(s, arrayKVRefs, arrayFuncRefs, result,
   null, arrayElementCell);
   if (result.isEmpty()) {



phoenix git commit: After HBASE-20940 any local index query will open all HFiles of every Region involved in the query.

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.2 16613da6f -> 022a2e86d


After HBASE-20940 any local index query will open all HFiles of every Region 
involved in the query.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/022a2e86
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/022a2e86
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/022a2e86

Branch: refs/heads/4.14-HBase-1.2
Commit: 022a2e86d4cc25c8a62e0468551701adfb947dc8
Parents: 16613da
Author: Lars Hofhansl 
Authored: Sun Sep 23 22:35:38 2018 -0700
Committer: Vincent Poon 
Committed: Fri Sep 28 14:57:29 2018 -0700

--
 .../phoenix/iterate/RegionScannerFactory.java| 19 +--
 1 file changed, 1 insertion(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/022a2e86/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
index aed5805..d81224d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
@@ -37,7 +37,6 @@ import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.KeyValueSchema;
-import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ValueBitSet;
 import org.apache.phoenix.schema.tuple.*;
 import org.apache.phoenix.transaction.PhoenixTransactionContext;
@@ -45,7 +44,6 @@ import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.ServerUtil;
-import org.apache.tephra.Transaction;
 
 import java.io.IOException;
 import java.util.List;
@@ -103,25 +101,10 @@ public abstract class RegionScannerFactory {
   final ImmutableBytesWritable ptr, final boolean useQualifierAsListIndex) {
 return new RegionScanner() {
 
-  private boolean hasReferences = checkForReferenceFiles();
   private HRegionInfo regionInfo = env.getRegionInfo();
   private byte[] actualStartKey = getActualStartKey();
   private boolean useNewValueColumnQualifier = EncodedColumnsUtil.useNewValueColumnQualifier(scan);
 
-  // If there are any reference files after local index region merge some cases we might
-  // get the records less than scan start row key. This will happen when we replace the
-  // actual region start key with merge region start key. This method gives whether are
-  // there any reference files in the region or not.
-  private boolean checkForReferenceFiles() {
-if(!ScanUtil.isLocalIndex(scan)) return false;
-for (byte[] family : scan.getFamilies()) {
-  if (getRegion().getStore(family).hasReferences()) {
-return true;
-  }
-}
-return false;
-  }
-
   // Get the actual scan start row of local index. This will be used to compare the row
   // key of the results less than scan start row when there are references.
   public byte[] getActualStartKey() {
@@ -182,7 +165,7 @@ public abstract class RegionScannerFactory {
 arrayElementCell = result.get(arrayElementCellPosition);
   }
   if (ScanUtil.isLocalIndex(scan) && !ScanUtil.isAnalyzeTable(scan)) {
-if(hasReferences && actualStartKey!=null) {
+if(actualStartKey!=null) {
   next = scanTillScanStartRow(s, arrayKVRefs, arrayFuncRefs, result,
   null, arrayElementCell);
   if (result.isEmpty()) {



phoenix git commit: After HBASE-20940 any local index query will open all HFiles of every Region involved in the query.

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.14-HBase-1.1 ecce879eb -> 785d3b9e3


After HBASE-20940 any local index query will open all HFiles of every Region 
involved in the query.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/785d3b9e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/785d3b9e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/785d3b9e

Branch: refs/heads/4.14-HBase-1.1
Commit: 785d3b9e3b83f6d087c2a5144ecd168785f85532
Parents: ecce879
Author: Lars Hofhansl 
Authored: Sun Sep 23 22:35:38 2018 -0700
Committer: Vincent Poon 
Committed: Fri Sep 28 14:58:14 2018 -0700

--
 .../phoenix/iterate/RegionScannerFactory.java| 19 +--
 1 file changed, 1 insertion(+), 18 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/785d3b9e/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
index aed5805..d81224d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/RegionScannerFactory.java
@@ -37,7 +37,6 @@ import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
 import org.apache.phoenix.index.IndexMaintainer;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.KeyValueSchema;
-import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.ValueBitSet;
 import org.apache.phoenix.schema.tuple.*;
 import org.apache.phoenix.transaction.PhoenixTransactionContext;
@@ -45,7 +44,6 @@ import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.IndexUtil;
 import org.apache.phoenix.util.ScanUtil;
 import org.apache.phoenix.util.ServerUtil;
-import org.apache.tephra.Transaction;
 
 import java.io.IOException;
 import java.util.List;
@@ -103,25 +101,10 @@ public abstract class RegionScannerFactory {
   final ImmutableBytesWritable ptr, final boolean useQualifierAsListIndex) {
 return new RegionScanner() {
 
-  private boolean hasReferences = checkForReferenceFiles();
   private HRegionInfo regionInfo = env.getRegionInfo();
   private byte[] actualStartKey = getActualStartKey();
   private boolean useNewValueColumnQualifier = EncodedColumnsUtil.useNewValueColumnQualifier(scan);
 
-  // If there are any reference files after local index region merge some cases we might
-  // get the records less than scan start row key. This will happen when we replace the
-  // actual region start key with merge region start key. This method gives whether are
-  // there any reference files in the region or not.
-  private boolean checkForReferenceFiles() {
-if(!ScanUtil.isLocalIndex(scan)) return false;
-for (byte[] family : scan.getFamilies()) {
-  if (getRegion().getStore(family).hasReferences()) {
-return true;
-  }
-}
-return false;
-  }
-
   // Get the actual scan start row of local index. This will be used to compare the row
   // key of the results less than scan start row when there are references.
   public byte[] getActualStartKey() {
@@ -182,7 +165,7 @@ public abstract class RegionScannerFactory {
 arrayElementCell = result.get(arrayElementCellPosition);
   }
   if (ScanUtil.isLocalIndex(scan) && !ScanUtil.isAnalyzeTable(scan)) {
-if(hasReferences && actualStartKey!=null) {
+if(actualStartKey!=null) {
   next = scanTillScanStartRow(s, arrayKVRefs, arrayFuncRefs, result,
   null, arrayElementCell);
   if (result.isEmpty()) {



[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-0.98-rc0 [deleted] c29b682b3


[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-1.3-rc0 [deleted] 3e99cee2c


[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-1.1-rc0 [deleted] 4ed3e1459


[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-1.4-rc0 [deleted] 393d699cd


[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-1.2-rc0 [deleted] 3386e3174


phoenix git commit: PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split (addendum)

2018-09-28 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master c7eeda03b -> 1aba55f42


PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1aba55f4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1aba55f4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1aba55f4

Branch: refs/heads/master
Commit: 1aba55f42510f057c876b17db0331be5b8e35e4e
Parents: c7eeda0
Author: Thomas D'Silva 
Authored: Fri Sep 28 17:32:22 2018 -0700
Committer: Thomas D'Silva 
Committed: Fri Sep 28 17:35:26 2018 -0700

--
 .../org/apache/phoenix/end2end/SplitIT.java | 36 +++-
 .../end2end/UpsertSelectAutoCommitIT.java   | 28 +--
 2 files changed, 20 insertions(+), 44 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1aba55f4/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
index 73cf1f0..60694ff 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitIT.java
@@ -1,7 +1,8 @@
 package org.apache.phoenix.end2end;
 
 import com.google.common.collect.Maps;
-import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
@@ -56,7 +57,7 @@ public class SplitIT extends BaseUniqueNamesOwnClusterIT {
 try {
 // split on the first row being scanned if splitPoint is null
 splitPoint = splitPoint!=null ? splitPoint : results.get(0).getRow();
-splitTable(splitPoint, tableName);
+splitTable(splitPoint, TableName.valueOf(tableName));
 tableWasSplitDuringScannerNext = true;
 }
 catch (SQLException e) {
@@ -69,14 +70,14 @@ public class SplitIT extends BaseUniqueNamesOwnClusterIT {
 
 }
 
-public static void splitTable(byte[] splitPoint, String tableName) throws SQLException, IOException {
-HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
-int nRegions = admin.getTableRegions(tableName.getBytes()).size();
+public static void splitTable(byte[] splitPoint, TableName tableName) throws SQLException, IOException {
+Admin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
+int nRegions = admin.getTableRegions(tableName).size();
 int nInitialRegions = nRegions;
-admin.split(tableName.getBytes(), splitPoint);
+admin.split(tableName, splitPoint);
 admin.disableTable(tableName);
 admin.enableTable(tableName);
-nRegions = admin.getTableRegions(tableName.getBytes()).size();
+nRegions = admin.getTableRegions(tableName).size();
 if (nRegions == nInitialRegions)
 throw new IOException("Could not split for " + tableName);
 }
@@ -99,7 +100,7 @@ public class SplitIT extends BaseUniqueNamesOwnClusterIT {
 for (int i=0; i<7; i++) {
 if (splitTableBeforeUpsertSelect) {
 // split the table and then run the UPSERT SELECT
-splitTable(PInteger.INSTANCE.toBytes(Math.pow(2, i)), tableName);
+splitTable(PInteger.INSTANCE.toBytes(Math.pow(2, i)), TableName.valueOf(tableName));
 }
 int upsertCount = stmt.executeUpdate();
 assertEquals((int) Math.pow(2, i), upsertCount);
@@ -124,7 +125,7 @@ public class SplitIT extends BaseUniqueNamesOwnClusterIT {
 for (int i=0; i<5; i++) {
 if (splitTableBeforeSelect) {
 // split the table and then run the SELECT
-splitTable(PInteger.INSTANCE.toBytes(Math.pow(2, i)), tableName);
+splitTable(PInteger.INSTANCE.toBytes(Math.pow(2, i)), TableName.valueOf(tableName));
 }
 
 int count = 0;
@@ -151,21 +152,8 @@ public class SplitIT extends BaseUniqueNamesOwnClusterIT {
 assertTrue(rs.next());
 int rowCount = rs.getInt(1);
 assertFalse(rs.next());
-
-// for ORDER BY a StaleRegionBoundaryException is thrown when a split happens
-if (orderBy) {
-// if the table splits before the SELECT we alw
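The pattern in this addendum is the standard HBaseAdmin-to-Admin migration: table-name arguments move from byte[]/String to TableName. A minimal standalone sketch of the same calls (assuming the HBase 1.x client API; MY_TABLE and the split point are illustrative):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.Bytes;

    public class SplitExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Admin admin = conn.getAdmin()) {
                TableName tn = TableName.valueOf("MY_TABLE");
                int before = admin.getTableRegions(tn).size(); // was getTableRegions(byte[])
                admin.split(tn, Bytes.toBytes("m"));           // was split(byte[], byte[])
                System.out.println("regions before split: " + before);
            }
        }
    }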

[phoenix] Git Push Summary

2018-09-28 Thread vincentpoon
Repository: phoenix
Updated Tags:  refs/tags/v4.14.1-HBase-1.4-rc0 [created] 9227f0d84


Build failed in Jenkins: Phoenix | Master #2141

2018-09-28 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4930 Add test for ORDER BY and LIMIT queries during a split

--
[...truncated 132.94 KB...]
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.65 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.464 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 205.527 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.646 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.585 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.968 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 578.126 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 258.284 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 255.059 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ConcurrentMutationsIT.testLockUntilMVCCAdvanced:385 Expected data table row count to match expected:<1> but was:<0>
[ERROR]   ConcurrentMutationsIT.testRowLockDuringPreBatchMutateWhenIndexed:329 Expected data table row count to match expected:<1> but was:<0>
[ERROR] Errors: 
[ERROR]   UpsertSelectAutoCommitIT.testUpsertSelectDoesntSeeUpsertedData:167 » DoNotRetryIO
[ERROR]   MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitReverseScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitReverseScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[INFO] 
[ERROR] Tests run: 3372, Failures: 2, Errors: 5, Skipped: 10
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.002 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.381 s - in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 97.268 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 97.971 s - in org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tes

Apache-Phoenix | origin/4.11-HBase-1.3 | Build Fixed

2018-09-28 Thread Apache Jenkins Server
origin/4.11-HBase-1.3 branch build status Fixed

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/origin/4.11-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.11/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.11/lastCompletedBuild/testReport/

Changes
No changes


Build times chart for the last couple of runs (latest build is the right-most). Legend: blue = normal, red = test failure, gray = timeout.


phoenix git commit: PHOENIX-4934 Make BaseTest.splitSystemCatalog generic

2018-09-28 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 f4ebaff0d -> 5b04764b8


PHOENIX-4934 Make BaseTest.splitSystemCatalog generic


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5b04764b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5b04764b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5b04764b

Branch: refs/heads/4.x-HBase-1.2
Commit: 5b04764b8648cd0c916bea51c6fd80688d610bfb
Parents: f4ebaff
Author: Thomas D'Silva 
Authored: Fri Sep 28 17:56:22 2018 -0700
Committer: Thomas D'Silva 
Committed: Fri Sep 28 17:56:56 2018 -0700

--
 .../java/org/apache/phoenix/query/BaseTest.java | 73 +---
 1 file changed, 34 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5b04764b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 45b61c1..5ca247b 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -1804,36 +1804,14 @@ public abstract class BaseTest {
 }
 return false;
 }
-
-/**
- * Splits SYSTEM.CATALOG into multiple regions based on the table or view names passed in.
- * Metadata for each table or view is moved to a separate region,
- * @param tenantToTableAndViewMap map from tenant to tables and views owned by the tenant
- */
-protected static void splitSystemCatalog(Map<String, List<String>> tenantToTableAndViewMap) throws Exception  {
-List<byte[]> splitPoints = Lists.newArrayListWithExpectedSize(5);
-// add the rows keys of the table or view metadata rows
-Set<String> schemaNameSet=Sets.newHashSetWithExpectedSize(15);
-for (Entry<String, List<String>> entrySet : tenantToTableAndViewMap.entrySet()) {
-String tenantId = entrySet.getKey();
-for (String fullName : entrySet.getValue()) {
-String schemaName = SchemaUtil.getSchemaNameFromFullName(fullName);
-// we don't allow SYSTEM.CATALOG to split within a schema, so to ensure each table
-// or view is on a separate region they need to have a unique tenant and schema name
-assertTrue("Schema names of tables/view must be unique ", schemaNameSet.add(tenantId+"."+schemaName));
-String tableName = SchemaUtil.getTableNameFromFullName(fullName);
-splitPoints.add(SchemaUtil.getTableKey(tenantId, "".equals(schemaName) ? null : schemaName, tableName));
-}
-}
-Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
-
+
+protected static void splitTable(TableName fullTableName, List<byte[]> splitPoints) throws Exception {
 HBaseAdmin admin =
 driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
 assertTrue("Needs at least two split points ", splitPoints.size() > 1);
 assertTrue(
-"Number of split points should be less than or equal to the number of region servers ",
-splitPoints.size() <= NUM_SLAVES_BASE);
+"Number of split points should be less than or equal to the number of region servers ",
+splitPoints.size() <= NUM_SLAVES_BASE);
 HBaseTestingUtility util = getUtility();
 MiniHBaseCluster cluster = util.getHBaseCluster();
 HMaster master = cluster.getMaster();
@@ -1848,7 +1826,7 @@ public abstract class BaseTest {
 availableRegionServers.push(util.getHBaseCluster().getRegionServer(i).getServerName());
 }
 List<HRegionInfo> tableRegions =
-admin.getTableRegions(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
+admin.getTableRegions(fullTableName);
 for (HRegionInfo hRegionInfo : tableRegions) {
 // filter on regions we are interested in
 if (regionContainsMetadataRows(hRegionInfo, splitPoints)) {
@@ -1858,15 +1836,6 @@ public abstract class BaseTest {
 }
 serverToRegionsList.get(serverName).add(hRegionInfo);
 availableRegionServers.remove(serverName);
-// Scan scan = new Scan();
-// scan.setStartRow(hRegionInfo.getStartKey());
-// scan.setStopRow(hRegionInfo.getEndKey());
-// HTable primaryTable = new HTable(getUtility().getConfiguration(),
-// PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
-// ResultScanner resultScanner = primaryTable.getScanner(scan)
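A hypothetical call site for the refactored helper, preserving the old splitSystemCatalog behavior (tenant/schema/table names are illustrative; this fragment assumes it runs inside a BaseTest subclass on the mini-cluster, with SchemaUtil, PhoenixDatabaseMetaData, Guava's Lists, and HBase's Bytes on the classpath):

    // Build SYSTEM.CATALOG split points from table keys, sort them, and hand
    // them to the new generic splitTable(TableName, List<byte[]>) above.
    List<byte[]> splitPoints = Lists.newArrayList(
        SchemaUtil.getTableKey("tenant1", "SCHEMA1", "TABLE1"),
        SchemaUtil.getTableKey("tenant2", "SCHEMA2", "TABLE2"));
    Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
    splitTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME, splitPoints);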

phoenix git commit: PHOENIX-4934 Make BaseTest.splitSystemCatalog generic

2018-09-28 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master 1aba55f42 -> 711089442


PHOENIX-4934 Make BaseTest.splitSystemCatalog generic


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/71108944
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/71108944
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/71108944

Branch: refs/heads/master
Commit: 7110894420db286dfa7a688dc587c559f8a9c812
Parents: 1aba55f
Author: Thomas D'Silva 
Authored: Fri Sep 28 17:47:34 2018 -0700
Committer: Thomas D'Silva 
Committed: Fri Sep 28 17:53:07 2018 -0700

--
 .../java/org/apache/phoenix/query/BaseTest.java | 73 +---
 1 file changed, 34 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/71108944/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 63555a7..eea4d88 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -1803,36 +1803,14 @@ public abstract class BaseTest {
 }
 return false;
 }
-
-/**
- * Splits SYSTEM.CATALOG into multiple regions based on the table or view names passed in.
- * Metadata for each table or view is moved to a separate region,
- * @param tenantToTableAndViewMap map from tenant to tables and views owned by the tenant
- */
-protected static void splitSystemCatalog(Map<String, List<String>> tenantToTableAndViewMap) throws Exception  {
-List<byte[]> splitPoints = Lists.newArrayListWithExpectedSize(5);
-// add the rows keys of the table or view metadata rows
-Set<String> schemaNameSet=Sets.newHashSetWithExpectedSize(15);
-for (Entry<String, List<String>> entrySet : tenantToTableAndViewMap.entrySet()) {
-String tenantId = entrySet.getKey();
-for (String fullName : entrySet.getValue()) {
-String schemaName = SchemaUtil.getSchemaNameFromFullName(fullName);
-// we don't allow SYSTEM.CATALOG to split within a schema, so to ensure each table
-// or view is on a separate region they need to have a unique tenant and schema name
-assertTrue("Schema names of tables/view must be unique ", schemaNameSet.add(tenantId+"."+schemaName));
-String tableName = SchemaUtil.getTableNameFromFullName(fullName);
-splitPoints.add(SchemaUtil.getTableKey(tenantId, "".equals(schemaName) ? null : schemaName, tableName));
-}
-}
-Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
-
+
+protected static void splitTable(TableName fullTableName, List<byte[]> splitPoints) throws Exception {
 Admin admin =
 driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
 assertTrue("Needs at least two split points ", splitPoints.size() > 1);
 assertTrue(
-"Number of split points should be less than or equal to the number of region servers ",
-splitPoints.size() <= NUM_SLAVES_BASE);
+"Number of split points should be less than or equal to the number of region servers ",
+splitPoints.size() <= NUM_SLAVES_BASE);
 HBaseTestingUtility util = getUtility();
 MiniHBaseCluster cluster = util.getHBaseCluster();
 HMaster master = cluster.getMaster();
@@ -1847,7 +1825,7 @@ public abstract class BaseTest {
 availableRegionServers.push(util.getHBaseCluster().getRegionServer(i).getServerName());
 }
 List<HRegionInfo> tableRegions =
-admin.getTableRegions(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
+admin.getTableRegions(fullTableName);
 for (HRegionInfo hRegionInfo : tableRegions) {
 // filter on regions we are interested in
 if (regionContainsMetadataRows(hRegionInfo, splitPoints)) {
@@ -1857,15 +1835,6 @@ public abstract class BaseTest {
 }
 serverToRegionsList.get(serverName).add(hRegionInfo);
 availableRegionServers.remove(serverName);
-// Scan scan = new Scan();
-// scan.setStartRow(hRegionInfo.getStartKey());
-// scan.setStopRow(hRegionInfo.getEndKey());
-// HTable primaryTable = new HTable(getUtility().getConfiguration(),
-// PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
-// ResultScanner resultScanner = primaryTable.getScanner(scan);
-

phoenix git commit: PHOENIX-4934 Make BaseTest.splitSystemCatalog generic

2018-09-28 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 bb2e77db8 -> 7e461146c


PHOENIX-4934 Make BaseTest.splitSystemCatalog generic


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7e461146
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7e461146
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7e461146

Branch: refs/heads/4.x-HBase-1.4
Commit: 7e461146c3b27f628b9ca924c252d93cb57392a5
Parents: bb2e77d
Author: Thomas D'Silva 
Authored: Fri Sep 28 17:56:22 2018 -0700
Committer: Thomas D'Silva 
Committed: Fri Sep 28 17:58:28 2018 -0700

--
 .../java/org/apache/phoenix/query/BaseTest.java | 73 +---
 1 file changed, 34 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7e461146/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 0c7a73c..082f686 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -1804,36 +1804,14 @@ public abstract class BaseTest {
 }
 return false;
 }
-
-/**
- * Splits SYSTEM.CATALOG into multiple regions based on the table or view 
names passed in.
- * Metadata for each table or view is moved to a separate region,
- * @param tenantToTableAndViewMap map from tenant to tables and views 
owned by the tenant
- */
-protected static void splitSystemCatalog(Map> 
tenantToTableAndViewMap) throws Exception  {
-List splitPoints = Lists.newArrayListWithExpectedSize(5);
-// add the rows keys of the table or view metadata rows
-Set schemaNameSet=Sets.newHashSetWithExpectedSize(15);
-for (Entry> entrySet : 
tenantToTableAndViewMap.entrySet()) {
-String tenantId = entrySet.getKey();
-for (String fullName : entrySet.getValue()) {
-String schemaName = 
SchemaUtil.getSchemaNameFromFullName(fullName);
-// we don't allow SYSTEM.CATALOG to split within a schema, so 
to ensure each table
-// or view is on a separate region they need to have a unique 
tenant and schema name
-assertTrue("Schema names of tables/view must be unique ", 
schemaNameSet.add(tenantId+"."+schemaName));
-String tableName = 
SchemaUtil.getTableNameFromFullName(fullName);
-splitPoints.add(
-SchemaUtil.getTableKey(tenantId, "".equals(schemaName) ? 
null : schemaName, tableName));
-}
-}
-Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
-
+
+protected static void splitTable(TableName fullTableName, List 
splitPoints) throws Exception {
 HBaseAdmin admin =
 driver.getConnectionQueryServices(getUrl(), 
TestUtil.TEST_PROPERTIES).getAdmin();
 assertTrue("Needs at least two split points ", splitPoints.size() > 1);
 assertTrue(
-"Number of split points should be less than or equal to the number 
of region servers ",
-splitPoints.size() <= NUM_SLAVES_BASE);
+"Number of split points should be less than or equal to the 
number of region servers ",
+splitPoints.size() <= NUM_SLAVES_BASE);
 HBaseTestingUtility util = getUtility();
 MiniHBaseCluster cluster = util.getHBaseCluster();
 HMaster master = cluster.getMaster();
@@ -1848,7 +1826,7 @@ public abstract class BaseTest {
 
availableRegionServers.push(util.getHBaseCluster().getRegionServer(i).getServerName());
 }
 List tableRegions =
-
admin.getTableRegions(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
+admin.getTableRegions(fullTableName);
 for (HRegionInfo hRegionInfo : tableRegions) {
 // filter on regions we are interested in
 if (regionContainsMetadataRows(hRegionInfo, splitPoints)) {
@@ -1858,15 +1836,6 @@ public abstract class BaseTest {
 }
 serverToRegionsList.get(serverName).add(hRegionInfo);
 availableRegionServers.remove(serverName);
-// Scan scan = new Scan();
-// scan.setStartRow(hRegionInfo.getStartKey());
-// scan.setStopRow(hRegionInfo.getEndKey());
-// HTable primaryTable = new 
HTable(getUtility().getConfiguration(),
-// PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
-// ResultScanner resultScanner = primaryTable.getScanner(scan)
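
The hunk above deletes the body of splitSystemCatalog, and the truncated mail never shows its replacement. Given the commit title, one plausible (unverified) reconstruction is that the method survives as a thin wrapper that computes the SYSTEM.CATALOG split keys exactly as the removed lines did and then delegates to the generic splitTable:

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Map.Entry;
    import java.util.Set;

    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
    import org.apache.phoenix.util.SchemaUtil;

    import com.google.common.collect.Lists;
    import com.google.common.collect.Sets;

    // Sketch only: splitSystemCatalog reduced to key computation plus a call
    // to the generic splitTable; the body mirrors the removed lines above.
    protected static void splitSystemCatalog(Map<String, List<String>> tenantToTableAndViewMap) throws Exception {
        List<byte[]> splitPoints = Lists.newArrayListWithExpectedSize(5);
        Set<String> schemaNameSet = Sets.newHashSetWithExpectedSize(15);
        for (Entry<String, List<String>> entry : tenantToTableAndViewMap.entrySet()) {
            String tenantId = entry.getKey();
            for (String fullName : entry.getValue()) {
                String schemaName = SchemaUtil.getSchemaNameFromFullName(fullName);
                // SYSTEM.CATALOG cannot split within a schema, so each table
                // or view needs a unique tenant + schema combination
                assertTrue("Schema names of tables/view must be unique ",
                    schemaNameSet.add(tenantId + "." + schemaName));
                String tableName = SchemaUtil.getTableNameFromFullName(fullName);
                splitPoints.add(SchemaUtil.getTableKey(
                    tenantId, "".equals(schemaName) ? null : schemaName, tableName));
            }
        }
        Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
        splitTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME, splitPoints);
    }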

phoenix git commit: PHOENIX-4934 Make BaseTest.splitSystemCatalog generic

2018-09-28 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 352cde5d1 -> 0020ff20f


PHOENIX-4934 Make BaseTest.splitSystemCatalog generic


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0020ff20
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0020ff20
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0020ff20

Branch: refs/heads/4.x-HBase-1.3
Commit: 0020ff20f2a5f8d8a5fafe735bf7d201b245afdd
Parents: 352cde5
Author: Thomas D'Silva 
Authored: Fri Sep 28 17:56:22 2018 -0700
Committer: Thomas D'Silva 
Committed: Fri Sep 28 17:58:14 2018 -0700

--
 .../java/org/apache/phoenix/query/BaseTest.java | 73 +---
 1 file changed, 34 insertions(+), 39 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0020ff20/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 45b61c1..5ca247b 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -1804,36 +1804,14 @@ public abstract class BaseTest {
         }
         return false;
     }
-
-    /**
-     * Splits SYSTEM.CATALOG into multiple regions based on the table or view names passed in.
-     * Metadata for each table or view is moved to a separate region,
-     * @param tenantToTableAndViewMap map from tenant to tables and views owned by the tenant
-     */
-    protected static void splitSystemCatalog(Map<String, List<String>> tenantToTableAndViewMap) throws Exception  {
-        List<byte[]> splitPoints = Lists.newArrayListWithExpectedSize(5);
-        // add the rows keys of the table or view metadata rows
-        Set<String> schemaNameSet=Sets.newHashSetWithExpectedSize(15);
-        for (Entry<String, List<String>> entrySet : tenantToTableAndViewMap.entrySet()) {
-            String tenantId = entrySet.getKey();
-            for (String fullName : entrySet.getValue()) {
-                String schemaName = SchemaUtil.getSchemaNameFromFullName(fullName);
-                // we don't allow SYSTEM.CATALOG to split within a schema, so to ensure each table
-                // or view is on a separate region they need to have a unique tenant and schema name
-                assertTrue("Schema names of tables/view must be unique ", schemaNameSet.add(tenantId+"."+schemaName));
-                String tableName = SchemaUtil.getTableNameFromFullName(fullName);
-                splitPoints.add(
-                    SchemaUtil.getTableKey(tenantId, "".equals(schemaName) ? null : schemaName, tableName));
-            }
-        }
-        Collections.sort(splitPoints, Bytes.BYTES_COMPARATOR);
-
+
+    protected static void splitTable(TableName fullTableName, List<byte[]> splitPoints) throws Exception {
         HBaseAdmin admin =
                 driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
         assertTrue("Needs at least two split points ", splitPoints.size() > 1);
         assertTrue(
-                "Number of split points should be less than or equal to the number of region servers ",
-                splitPoints.size() <= NUM_SLAVES_BASE);
+            "Number of split points should be less than or equal to the number of region servers ",
+            splitPoints.size() <= NUM_SLAVES_BASE);
         HBaseTestingUtility util = getUtility();
         MiniHBaseCluster cluster = util.getHBaseCluster();
         HMaster master = cluster.getMaster();
@@ -1848,7 +1826,7 @@ public abstract class BaseTest {
             availableRegionServers.push(util.getHBaseCluster().getRegionServer(i).getServerName());
         }
         List<HRegionInfo> tableRegions =
-                admin.getTableRegions(PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
+            admin.getTableRegions(fullTableName);
         for (HRegionInfo hRegionInfo : tableRegions) {
             // filter on regions we are interested in
             if (regionContainsMetadataRows(hRegionInfo, splitPoints)) {
@@ -1858,15 +1836,6 @@ public abstract class BaseTest {
             }
             serverToRegionsList.get(serverName).add(hRegionInfo);
             availableRegionServers.remove(serverName);
-            // Scan scan = new Scan();
-            // scan.setStartRow(hRegionInfo.getStartKey());
-            // scan.setStopRow(hRegionInfo.getEndKey());
-            // HTable primaryTable = new HTable(getUtility().getConfiguration(),
-            //     PhoenixDatabaseMetaData.SYSTEM_CATALOG_HBASE_TABLE_NAME);
-            // ResultScanner resultScanner = primaryTable.getScanner(scan);
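
The serverToRegionsList and availableRegionServers bookkeeping above pairs each matching region with its own region server, but the hunk is cut off before the step that acts on that pairing. A hedged sketch of the usual follow-up, assuming the HBase 1.x Admin.move(encodedRegionName, destServerName) API; the helper name below is hypothetical:

    import java.io.IOException;
    import java.util.List;
    import java.util.Map;

    import org.apache.hadoop.hbase.HRegionInfo;
    import org.apache.hadoop.hbase.ServerName;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch only: move each region of interest to the region server it was
    // paired with, so no two split regions end up sharing a server.
    static void moveRegionsToAssignedServers(HBaseAdmin admin,
            Map<ServerName, List<HRegionInfo>> serverToRegionsList) throws IOException {
        for (Map.Entry<ServerName, List<HRegionInfo>> entry : serverToRegionsList.entrySet()) {
            ServerName destination = entry.getKey();
            for (HRegionInfo region : entry.getValue()) {
                // HBase 1.x signature: move(byte[] encodedRegionName, byte[] destServerName)
                admin.move(region.getEncodedNameAsBytes(), Bytes.toBytes(destination.getServerName()));
            }
        }
    }

Spreading the regions this way is what lets the per-region tests exercise cross-region behavior, which is what the uniqueness assertions earlier in the method protect.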

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #492

2018-09-28 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4934 Make BaseTest.splitSystemCatalog generic

--
[...truncated 16.33 KB...]
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-load-balancer:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix:pom:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ line 467, column 24
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING] 
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Phoenix [pom]
[INFO] Phoenix Core   [jar]
[INFO] Phoenix - Flume[jar]
[INFO] Phoenix - Kafka[jar]
[INFO] Phoenix - Pig  [jar]
[INFO] Phoenix Query Server Client[jar]
[INFO] Phoenix Query Server   [jar]
[INFO] Phoenix - Pherf[jar]
[INFO] Phoenix - Spark[jar]
[INFO] Phoenix - Hive [jar]
[INFO] Phoenix Client [jar]
[INFO] Phoenix Server [jar]
[INFO] Phoenix Assembly   [pom]
[INFO] Phoenix - Tracing Web Application  [jar]
[INFO] Phoenix Load Balancer  [jar]
[INFO] 
[INFO] -< org.apache.phoenix:phoenix >-
[INFO] Building Apache Phoenix 4.14.0-HBase-1.2  [1/15]
[INFO] [ pom ]-
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ phoenix ---
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ phoenix ---
[INFO] 
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ phoenix ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.2:attach-descriptor (attach-descriptor) @ 
phoenix ---
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ phoenix ---
[INFO] Installing  to /home/jenkins/.m2/repository/org/apache/phoenix/phoenix/4.14.0-HBase-1.2/phoenix-4.14.0-HBase-1.2.pom
[INFO] Installing  to /home/jenkins/.m2/repository/org/apache/phoenix/phoenix/4.14.0-HBase-1.2/phoenix-4.14.0-HBase-1.2-tests.jar
[INFO] 
[INFO] --< org.apache.phoenix:phoenix-core >---
[INFO] Building Phoenix Core 4.14.0-HBase-1.2[2/15]
[INFO] [ jar ]-
Downloading from apache release: https://repository.apache.org/content/repositories/releases/org/apache/tephra/tephra-hbase-compat-1.1/0.14.0-incubating/tephra-hbase-compat-1.1-0.14.0-incubating.pom
Progress (1): 3.9 kB
Downloaded from apache release: https://repository.apache.org/content/repositories/releases/org/apache/tephra/tephra-hbase-compat-1.1/0.14.0-incubating/tephra-hbase-compat-1.1-0.14.0-incubating.pom (3.9 kB at 3.6 kB/s)
Downloading from apache release: https://repository.apache.org/content/repositories/releases/org/apache/tephra/tephra-hbase-compat-1.1-base/0.14.0-incubating/tephra-hbase-compat-1.1-base-0.14.0-incubating.pom
Progress (1): 4.1/5.0 kB
Progress (1): 5.0 kB
Downloaded from apache release: https://repository.apache.org/content/repositories/releases/org/apache/tephra/tephra-hbase-compat-1.1-base/0.14.0-incubating/tephra-hbase-compat-1.1-base-0.14.0-incubating.p

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #493

2018-09-28 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H24 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-1.2^{commit} # timeout=10
Checking out Revision 5b04764b8648cd0c916bea51c6fd80688d610bfb (origin/4.x-HBase-1.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5b04764b8648cd0c916bea51c6fd80688d610bfb
Commit message: "PHOENIX-4934 Make BaseTest.splitSystemCatalog generic"
 > git rev-list --no-walk 5b04764b8648cd0c916bea51c6fd80688d610bfb # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-1.2] $ /bin/bash -xe /tmp/jenkins935522074052646407.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
htrace-core4
htrace-zipkin
[Phoenix-4.x-HBase-1.2] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-core:[unknown-version],  line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-kafka:[unknown-version],  line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-hive:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-client:[unknown-version],