Apache-Phoenix | Master | Build Successful

2019-07-11 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[chinmayskulkarni] PHOENIX-5228 use slf4j for logging in phoenix project (addendum)



Build times for the last couple of runs. The latest build time is the right-most. | Legend: blue: normal, red: test failure, gray: timeout


Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2019-07-11 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[chinmayskulkarni] PHOENIX-5382 : Improved performance with Bulk operations over iterations

[chinmayskulkarni] PHOENIX-5228 use slf4j for logging in phoenix project (addendum)



Build times for the last couple of runs. The latest build time is the right-most. | Legend: blue: normal, red: test failure, gray: timeout


Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #467

2019-07-11 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Phoenix-4.x-HBase-1.4 #212

2019-07-11 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Phoenix | Master #2451

2019-07-11 Thread Apache Jenkins Server
See 



[phoenix] branch master updated: PHOENIX-5228 use slf4j for logging in phoenix project (addendum)

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 02304e6  PHOENIX-5228 use slf4j for logging in phoenix project (addendum)
02304e6 is described below

commit 02304e6390bcba908af21da2dd124f188b9fc1e4
Author: Xinyi 
AuthorDate: Sun Jun 16 16:34:11 2019 -0700

PHOENIX-5228 use slf4j for logging in phoenix project (addendum)

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/index/MutableIndexIT.java  |   4 +-
 .../hbase/ipc/PhoenixRpcSchedulerFactory.java  |   8 +-
 .../java/org/apache/phoenix/cache/GlobalCache.java |  11 +--
 .../apache/phoenix/cache/ServerCacheClient.java|  19 ++--
 .../org/apache/phoenix/cache/TenantCacheImpl.java  |   6 +-
 .../cache/aggcache/SpillableGroupByCache.java  |   8 +-
 .../org/apache/phoenix/compile/FromCompiler.java   |  11 ++-
 .../GroupedAggregateRegionObserver.java|  16 ++--
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../coprocessor/MetaDataRegionObserver.java| 100 +++--
 .../coprocessor/PhoenixAccessController.java   |   6 +-
 .../phoenix/coprocessor/TaskRegionObserver.java|  24 ++---
 .../coprocessor/tasks/DropChildViewsTask.java  |   3 +-
 .../coprocessor/tasks/IndexRebuildTask.java|   3 +-
 .../org/apache/phoenix/execute/BaseQueryPlan.java  |   6 +-
 .../org/apache/phoenix/execute/HashJoinPlan.java   |   6 +-
 .../expression/function/CollationKeyFunction.java  |   6 +-
 .../org/apache/phoenix/hbase/index/Indexer.java|  18 ++--
 .../hbase/index/util/IndexManagementUtil.java  |   3 +-
 .../index/write/ParallelWriterIndexCommitter.java  |   1 -
 .../hbase/index/write/RecoveryIndexWriter.java |   3 +-
 .../phoenix/index/PhoenixIndexFailurePolicy.java   |  19 ++--
 .../apache/phoenix/jdbc/PhoenixEmbeddedDriver.java |   5 +-
 .../org/apache/phoenix/jdbc/PhoenixStatement.java  |  12 ++-
 .../apache/phoenix/log/QueryLoggerDisruptor.java   |   5 +-
 .../apache/phoenix/mapreduce/OrphanViewTool.java   |   3 +-
 .../phoenix/mapreduce/PhoenixRecordReader.java |   9 +-
 .../apache/phoenix/mapreduce/index/IndexTool.java  |   7 +-
 .../index/PhoenixIndexImportDirectReducer.java |   3 +-
 .../index/PhoenixIndexPartialBuildMapper.java  |   3 +-
 .../index/PhoenixServerBuildIndexMapper.java   |   4 -
 .../index/automation/PhoenixMRJobSubmitter.java|   3 +-
 .../monitoring/GlobalMetricRegistriesAdapter.java  |   6 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java |  33 ---
 .../schema/stats/DefaultStatisticsCollector.java   |   3 +-
 .../phoenix/schema/stats/StatisticsScanner.java|  12 ++-
 .../java/org/apache/phoenix/trace/TraceReader.java |   4 +-
 .../phoenix/util/EquiDepthStreamHistogram.java |   3 +-
 38 files changed, 236 insertions(+), 166 deletions(-)
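
The files above are being moved to the slf4j API as part of the PHOENIX-5228 series (direct log4j/commons-logging usage and slf4j Markers are dropped in favor of plain parameterized logging). A minimal sketch of the pattern, in Java; the class name and log messages below are hypothetical and not taken from the patch:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Hypothetical example class illustrating the slf4j idiom; not part of the Phoenix code base.
    public class Slf4jPatternExample {

        // One static logger per class, keyed to the class name.
        private static final Logger LOGGER = LoggerFactory.getLogger(Slf4jPatternExample.class);

        public void doWork(String tableName) {
            // Parameterized messages avoid string concatenation when the level is disabled.
            LOGGER.info("Starting work on table {}", tableName);
            try {
                // ... actual work would go here ...
            } catch (RuntimeException e) {
                // Pass the throwable as the last argument so the stack trace is logged.
                LOGGER.error("Work failed for table {}", tableName, e);
                throw e;
            }
        }
    }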

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 43526a2..2f7b1c9 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -50,12 +50,14 @@ import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.primitives.Doubles;
 
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
-
+private static final Logger LOGGER = LoggerFactory.getLogger(MutableIndexIT.class);
 protected final boolean localIndex;
 private final String tableDDLOptions;
 
diff --git a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
index fbec7b8..0d15b63 100644
--- a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
+++ b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
@@ -26,8 +26,6 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import org.slf4j.Marker;
-import org.slf4j.MarkerFactory;
 
 import com.google.common.base.Preconditions;
 
@@ -37,8 +35,8 @@ import com.google.common.base.Preconditions;
  */
 public class PhoenixRpcSchedulerFactory implements RpcSchedulerFactory {
 
-private static final Logger LOGGER = LoggerFactory.getLogger(PhoenixRpcSchedulerFactory.class);
-private static final Marker fatal = MarkerFactory.getMarker("FATAL");
+private static final Logger LOGGER =
+   

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 4a6b48c  PHOENIX-5382 : Improved performance with Bulk operations over iterations
4a6b48c is described below

commit 4a6b48c2b46b60ecade32bad6823d77ff9ca8112
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:34:33 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 9ed206e..3bc15fd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -734,10 +734,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List dynColumns, PTable theTable)
 throws SQLException {
 if (!dynColumns.isEmpty()) {
-List allcolumns = new ArrayList();
 List existingColumns = theTable.getColumns();
 // Need to skip the salting column, as it's handled in the PTable builder call below
-allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+List allcolumns = new ArrayList<>(
+theTable.getBucketNum() == null ? existingColumns :
+existingColumns.subList(1, existingColumns.size()));
 // Position still based on with the salting columns
 int position = existingColumns.size();
 PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
 // no need to combine columns for local indexes on regular tables
 return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List allColumns = Lists.newArrayList();
 List excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List cellList : mutation.getFamilyCellMap().values()) {
 List keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-for (KeyValue keyValue : keyValueList) {
-keyValues.add(keyValue);
-}
+keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());
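
The hunks above (and the identical back-ports below) share one pattern: per-element loops and two-step copies are replaced with the equivalent bulk collection operations, and a repeated getter call is cached in a local variable. A minimal before/after sketch of those shapes, in Java; the class, element types, and method names here are hypothetical, not from the patch:

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // Hypothetical illustration of the bulk-operations refactoring pattern.
    public class BulkOpsExample {

        static List<String> copyBefore(List<String> existing) {
            // Before: allocate, then copy element-by-element via a second call.
            List<String> all = new ArrayList<String>();
            all.addAll(existing);
            return all;
        }

        static List<String> copyAfter(List<String> existing) {
            // After: size and copy in one step with the copy constructor.
            return new ArrayList<>(existing);
        }

        static void appendAfter(List<String> target, List<String> source) {
            // After: a single bulk addAll replaces the hand-written for-each add loop.
            target.addAll(source);
        }

        static void sortAfter(List<String> values, Comparator<String> cmp) {
            // After: List.sort (Java 8+) replaces Collections.sort(list, comparator).
            values.sort(cmp);
        }
    }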

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new dcae102  PHOENIX-5382 : Improved performance with Bulk operations over iterations
dcae102 is described below

commit dcae102aa56a009663ea0dcb9ba86f84052a46ab
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:28:06 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 3e249ac..2ced6a6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -738,10 +738,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List dynColumns, PTable theTable)
 throws SQLException {
 if (!dynColumns.isEmpty()) {
-List allcolumns = new ArrayList();
 List existingColumns = theTable.getColumns();
 // Need to skip the salting column, as it's handled in the PTable builder call below
-allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+List allcolumns = new ArrayList<>(
+theTable.getBucketNum() == null ? existingColumns :
+existingColumns.subList(1, existingColumns.size()));
 // Position still based on with the salting columns
 int position = existingColumns.size();
 PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
 // no need to combine columns for local indexes on regular tables
 return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List allColumns = Lists.newArrayList();
 List excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List cellList : mutation.getFamilyCellMap().values()) {
 List keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-for (KeyValue keyValue : keyValueList) {
-keyValues.add(keyValue);
-}
+keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5bcb8c3  PHOENIX-5382 : Improved performance with Bulk operations over iterations
5bcb8c3 is described below

commit 5bcb8c3fea8080e81b558a0c2a90c9675877957c
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:20:21 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 9ed206e..3bc15fd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -734,10 +734,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List dynColumns, PTable theTable)
 throws SQLException {
 if (!dynColumns.isEmpty()) {
-List allcolumns = new ArrayList();
 List existingColumns = theTable.getColumns();
 // Need to skip the salting column, as it's handled in the PTable builder call below
-allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+List allcolumns = new ArrayList<>(
+theTable.getBucketNum() == null ? existingColumns :
+existingColumns.subList(1, existingColumns.size()));
 // Position still based on with the salting columns
 int position = existingColumns.size();
 PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
 // no need to combine columns for local indexes on regular tables
 return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List allColumns = Lists.newArrayList();
 List excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List cellList : mutation.getFamilyCellMap().values()) {
 List keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-for (KeyValue keyValue : keyValueList) {
-keyValues.add(keyValue);
-}
+keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #1055

2019-07-11 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins8061679490522359328.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:43903656 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G 1018M  8.5G  11% /run
/dev/sda3   3.6T  488G  3.0T  14% /
tmpfs48G 0   48G   0% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  236M  213M  53% /boot
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
tmpfs   9.5G 0  9.5G   0% /run/user/1000
/dev/loop11  57M   57M 0 100% /snap/snapcraft/3022
/dev/loop4   57M   57M 0 100% /snap/snapcraft/3059
/dev/loop10  55M   55M 0 100% /snap/lxd/10972
/dev/loop7   89M   89M 0 100% /snap/core/7169
/dev/loop8   89M   89M 0 100% /snap/core/7270
/dev/loop2   55M   55M 0 100% /snap/lxd/11098
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
apache-maven-3.6.0
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its dependencies could not be resolved: Failed to read artifact descriptor for org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central (https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
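
The "Received fatal alert: protocol_version" from repo.maven.apache.org usually means the JVM running Maven offered only TLS 1.0/1.1, which Maven Central no longer accepts; on an older Java 7 toolchain the commonly cited workaround is to run Maven with -Dhttps.protocols=TLSv1.2 (or move to a newer JDK). A minimal Java sketch of the same property-based idea, assuming an HttpsURLConnection-style client; the class here is hypothetical:

    // Hypothetical helper showing the standard JSSE https.protocols property.
    public class TlsProtocolCheck {
        public static void main(String[] args) {
            // On Java 7, HTTPS clients default to TLS 1.0 unless https.protocols is set,
            // which Maven Central rejects with "fatal alert: protocol_version".
            System.out.println("https.protocols = " + System.getProperty("https.protocols"));

            // Possible workaround: set the property before the first TLS connection is made
            // (equivalent to passing -Dhttps.protocols=TLSv1.2 on the command line).
            System.setProperty("https.protocols", "TLSv1.2");
        }
    }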