Build failed in Jenkins: Phoenix | Master #2232

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

--
[...truncated 138.17 KB...]
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 
s - in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.58 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
177.178 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
177.731 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
180.012 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.24 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.974 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.855 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
176.764 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.791 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.823 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 194.697 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 479.024 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 189.017 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.854 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.907 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,100.99 s - in org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[WARNING] Tests run: 33, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 
414.537 s - in org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 430.963 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 446.972 
s - in org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #253

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

--
[...truncated 113.95 KB...]
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.849 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.998 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
173.326 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
176.671 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
161.055 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.19 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.52 s - 
in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.799 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
177.948 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.956 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.129 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 178.635 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 191.384 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 443.132 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.353 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.621 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,026.454 s - in org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 429.424 
s - in org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 390.396 
s - in org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 434.169 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Running 

Build failed in Jenkins: Phoenix-omid2 #144

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[jamestaylor] PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to

--
[...truncated 204.66 KB...]
at 
org.apache.phoenix.tx.FlappingTransactionIT.testExternalTxContext(FlappingTransactionIT.java:285)

[ERROR] 
testInflightDeleteNotSeen[FlappingTransactionIT_transactionProvider=OMID](org.apache.phoenix.tx.FlappingTransactionIT)
  Time elapsed: 2.438 s  <<< ERROR!
java.sql.SQLException: ERROR 523 (42900): Transaction aborted due to conflict 
with other mutations. Transaction 154156039945400 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testInflightDeleteNotSeen(FlappingTransactionIT.java:219)
Caused by: org.apache.omid.transaction.RollbackException: Transaction 
154156039945400 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testInflightDeleteNotSeen(FlappingTransactionIT.java:219)

[ERROR] 
testDelete[FlappingTransactionIT_transactionProvider=OMID](org.apache.phoenix.tx.FlappingTransactionIT)
  Time elapsed: 2.38 s  <<< ERROR!
java.sql.SQLException: ERROR 523 (42900): Transaction aborted due to conflict 
with other mutations. Transaction 154156040183500 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testDelete(FlappingTransactionIT.java:110)
Caused by: org.apache.omid.transaction.RollbackException: Transaction 
154156040183500 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testDelete(FlappingTransactionIT.java:110)

[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.392 s 
- in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.791 
s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.045 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 182.39 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 434.452 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 78, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 492.136 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   FlappingTransactionIT.testDelete:110 » SQL ERROR 523 (42900): 
Transaction abor...
[ERROR]   FlappingTransactionIT.testExternalTxContext:285 » SQL ERROR 523 
(42900): Trans...
[ERROR]   FlappingTransactionIT.testInflightDeleteNotSeen:219 » SQL ERROR 523 
(42900): T...
[ERROR]   FlappingTransactionIT.testInflightUpdateNotSeen:165 » SQL ERROR 523 
(42900): T...
[INFO] 
[ERROR] Tests run: 3505, Failures: 0, Errors: 4, Skipped: 1
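
All four errors above are the same failure mode: the OMID transaction provider detects a write-write conflict at commit, invalidates the losing transaction (RollbackException), and Phoenix surfaces that to JDBC clients as a SQLException with error code 523 (SQLSTATE 42900). Below is a minimal sketch of how a client might handle that code; the table T, the local connection URL, and the retry count are illustrative assumptions, not taken from this log.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RetryOnTxConflict {
    // Phoenix reports an invalidated OMID transaction as ERROR 523 (SQLSTATE 42900).
    private static final int TX_CONFLICT_ERROR_CODE = 523;
    private static final int MAX_ATTEMPTS = 3;

    public static void main(String[] args) throws SQLException {
        for (int attempt = 1; ; attempt++) {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.setAutoCommit(false);
                // Hypothetical mutation; the conflict is only detected at commit time.
                conn.createStatement().executeUpdate("UPSERT INTO T (K, V) VALUES (1, 'v')");
                conn.commit();
                return;
            } catch (SQLException e) {
                if (e.getErrorCode() != TX_CONFLICT_ERROR_CODE || attempt >= MAX_ATTEMPTS) {
                    throw e; // a different error, or retries exhausted
                }
                // Our transaction lost a write-write conflict and was invalidated;
                // re-run the whole statement in a fresh transaction rather than re-committing.
            }
        }
    }
}
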
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.762 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.366 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
179.819 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
165.861 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, 

Build failed in Jenkins: Phoenix | Master #2231

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using

--
[...truncated 145.00 KB...]
[WARNING] Tests run: 33, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 
347.475 s - in org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 356.096 
s - in org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Running 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
133.743 s - in 
org.apache.phoenix.end2end.NonColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 443.488 
s - in org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Running org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Running org.apache.phoenix.end2end.PhoenixDriverIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
134.886 s - in 
org.apache.phoenix.end2end.NonColumnEncodedImmutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.106 s 
- in org.apache.phoenix.end2end.PartialResultServerConfigurationIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.058 s 
- in org.apache.phoenix.end2end.PhoenixDriverIT
[INFO] Running org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Running org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.74 s - 
in org.apache.phoenix.end2end.QueryTimeoutIT
[INFO] Running org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.705 s 
- in org.apache.phoenix.end2end.QueryWithLimitIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.258 s 
- in org.apache.phoenix.end2end.QueryLoggerIT
[INFO] Running org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.554 s 
- in org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
[INFO] Running org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.535 s 
- in org.apache.phoenix.end2end.RenewLeaseIT
[INFO] Running org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.103 s 
- in org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.081 
s - in org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Running org.apache.phoenix.end2end.SplitIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.193 s 
- in org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Running org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running 
org.apache.phoenix.end2end.SysTableNamespaceMappedStatsCollectorIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.638 
s - in org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogCreationOnConnectionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
130.002 s - in 
org.apache.phoenix.end2end.SysTableNamespaceMappedStatsCollectorIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 232.585 
s - in org.apache.phoenix.end2end.SplitIT
[INFO] Running org.apache.phoenix.end2end.SystemTablePermissionsIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.844 s 
- in org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.TableDDLPermissionsIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 132.931 
s - in org.apache.phoenix.end2end.SystemTablePermissionsIT
[INFO] Running org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.171 s 
- in org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 96, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,355.075 s - in org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.671 s 
- in org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Running 

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #252

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using

--
[...truncated 113.10 KB...]
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.742 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.403 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
154.66 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
171.939 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
162.455 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.163 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.547 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.566 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
174.753 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.859 s 
- in org.apache.phoenix.end2end.DropSchemaIT
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 99.954 
s - in org.apache.phoenix.end2end.CsvBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 400.094 
s - in org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 183.054 
s - in org.apache.phoenix.end2end.FlappingLocalIndexIT
[INFO] Tests run: 32, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 178.638 
s - in org.apache.phoenix.end2end.IndexExtendedIT
[INFO] Running org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Running org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Running 
org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.843 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.431 s 
- in org.apache.phoenix.end2end.IndexToolForPartialBuildWithNamespaceEnabledIT
[INFO] Running org.apache.phoenix.end2end.IndexToolIT
[INFO] Running org.apache.phoenix.end2end.LocalIndexSplitMergeIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
1,004.841 s - in org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 417.751 
s - in org.apache.phoenix.end2end.IndexScrutinyToolIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 355.984 
s - in 

Build failed in Jenkins: Phoenix-omid2 #143

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[jamestaylor] PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to

--
[...truncated 205.21 KB...]
java.sql.SQLException: ERROR 523 (42900): Transaction aborted due to conflict 
with other mutations. Transaction 154154480629800 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testDelete(FlappingTransactionIT.java:110)
Caused by: org.apache.omid.transaction.RollbackException: Transaction 
154154480629800 got invalidated
at 
org.apache.phoenix.tx.FlappingTransactionIT.testDelete(FlappingTransactionIT.java:110)

[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 359.177 
s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.149 
s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.591 s 
- in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 150.887 
s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 50, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 451.563 
s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 78, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 751.648 
s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   FlappingTransactionIT.testDelete:110 » SQL ERROR 523 (42900): 
Transaction abor...
[ERROR]   FlappingTransactionIT.testExternalTxContext:285 » SQL ERROR 523 
(42900): Trans...
[ERROR]   FlappingTransactionIT.testInflightDeleteNotSeen:219 » SQL ERROR 523 
(42900): T...
[ERROR]   FlappingTransactionIT.testInflightUpdateNotSeen:165 » SQL ERROR 523 
(42900): T...
[INFO] 
[ERROR] Tests run: 3505, Failures: 0, Errors: 4, Skipped: 1
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) 
@ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test 
(NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running 
org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.781 s 
- in 
org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.398 s 
- in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[INFO] Running 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
189.142 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
186.359 s - in 
org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
188.007 s - in 
org.apache.phoenix.end2end.ColumnEncodedMutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.981 s 
- in org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.708 s 
- in org.apache.phoenix.end2end.ContextClassloaderIT
[INFO] Running org.apache.phoenix.end2end.CostBasedDecisionIT
[INFO] Running org.apache.phoenix.end2end.CountDistinctCompressionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.695 s 
- in org.apache.phoenix.end2end.CountDistinctCompressionIT
[WARNING] Tests run: 28, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 
177.247 s - in org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Running 

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #538

2018-11-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H21 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-1.2^{commit} # timeout=10
Checking out Revision 02a6bbce5a210278639e9d223ceb9a16cc645189 
(origin/4.x-HBase-1.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 02a6bbce5a210278639e9d223ceb9a16cc645189
Commit message: "PHOENIX-4996: Refactor PTableImpl to use Builder Pattern"
 > git rev-list --no-walk 02a6bbce5a210278639e9d223ceb9a16cc645189 # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-1.2] $ /bin/bash -xe /tmp/jenkins5505516018372721453.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
htrace-core4
[Phoenix-4.x-HBase-1.2] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean 
install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter. @ 
org.apache.phoenix:phoenix-core:[unknown-version], 
 
line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter. @ 
org.apache.phoenix:phoenix-kafka:[unknown-version], 
 
line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-hive:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter. @ 
org.apache.phoenix:phoenix-client:[unknown-version], 

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #537

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

--
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H21 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-1.2^{commit} # timeout=10
Checking out Revision 02a6bbce5a210278639e9d223ceb9a16cc645189 
(origin/4.x-HBase-1.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 02a6bbce5a210278639e9d223ceb9a16cc645189
Commit message: "PHOENIX-4996: Refactor PTableImpl to use Builder Pattern"
 > git rev-list --no-walk c509d58f12b73ba9e24b53d2e9ca0271666a400d # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-1.2] $ /bin/bash -xe /tmp/jenkins1607907165449392212.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
htrace-core4
[Phoenix-4.x-HBase-1.2] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean 
install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter. @ 
org.apache.phoenix:phoenix-core:[unknown-version], 
 
line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter. @ 
org.apache.phoenix:phoenix-kafka:[unknown-version], 
 
line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-hive:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 
maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.phoenix:phoenix-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in 

[1/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master 6053ee63a -> 11cc13b04


http://git-wip-us.apache.org/repos/asf/phoenix/blob/11cc13b0/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 4a38b48..ab19a99 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -36,6 +36,7 @@ import java.util.Map.Entry;
 
 import javax.annotation.Nonnull;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Delete;
@@ -69,7 +70,6 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PFloat;
-import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.util.ByteUtil;
@@ -102,164 +102,661 @@ import com.google.common.collect.Maps;
 public class PTableImpl implements PTable {
 private static final Integer NO_SALTING = -1;
 
-private PTableKey key;
-private PName name;
-private PName schemaName = PName.EMPTY_NAME;
-private PName tableName = PName.EMPTY_NAME;
-private PName tenantId;
-private PTableType type;
-private PIndexState state;
-private long sequenceNumber;
-private long timeStamp;
-private long indexDisableTimestamp;
+private IndexMaintainer indexMaintainer;
+private ImmutableBytesWritable indexMaintainersPtr;
+
+private final PTableKey key;
+private final PName name;
+private final PName schemaName;
+private final PName tableName;
+private final PName tenantId;
+private final PTableType type;
+private final PIndexState state;
+private final long sequenceNumber;
+private final long timeStamp;
+private final long indexDisableTimestamp;
 // Have MultiMap for String->PColumn (may need family qualifier)
-private List<PColumn> pkColumns;
-private List<PColumn> allColumns;
+private final List<PColumn> pkColumns;
+private final List<PColumn> allColumns;
// columns that were inherited from a parent table but that were dropped in the view
-private List<PColumn> excludedColumns;
-private List<PColumnFamily> families;
-private Map<byte[], PColumnFamily> familyByBytes;
-private Map<String, PColumnFamily> familyByString;
-private ListMultimap<String, PColumn> columnsByName;
-private Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
-private PName pkName;
-private Integer bucketNum;
-private RowKeySchema rowKeySchema;
+private final List<PColumn> excludedColumns;
+private final List<PColumnFamily> families;
+private final Map<byte[], PColumnFamily> familyByBytes;
+private final Map<String, PColumnFamily> familyByString;
+private final ListMultimap<String, PColumn> columnsByName;
+private final Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
+private final PName pkName;
+private final Integer bucketNum;
+private final RowKeySchema rowKeySchema;
 // Indexes associated with this table.
-private List<PTable> indexes;
+private final List<PTable> indexes;
 // Data table name that the index is created on.
-private PName parentName;
-private PName parentSchemaName;
-private PName parentTableName;
-private List<PName> physicalNames;
-private boolean isImmutableRows;
-private IndexMaintainer indexMaintainer;
-private ImmutableBytesWritable indexMaintainersPtr;
-private PName defaultFamilyName;
-private String viewStatement;
-private boolean disableWAL;
-private boolean multiTenant;
-private boolean storeNulls;
-private TransactionFactory.Provider transactionProvider;
-private ViewType viewType;
-private PDataType viewIndexType;
-private Long viewIndexId;
-private int estimatedSize;
-private IndexType indexType;
-private int baseColumnCount;
-private boolean rowKeyOrderOptimizable; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private boolean hasColumnsRequiringUpgrade; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private int rowTimestampColPos;
-private long updateCacheFrequency;
-private boolean isNamespaceMapped;
-private String autoPartitionSeqName;
-private boolean isAppendOnlySchema;
-private ImmutableStorageScheme immutableStorageScheme;
-private QualifierEncodingScheme qualifierEncodingScheme;
-private EncodedCQCounter encodedCQCounter;
-private Boolean useStatsForParallelization;
-
-public PTableImpl() {
-this.indexes = Collections.emptyList();
-this.physicalNames = Collections.emptyList();
-this.rowKeySchema = RowKeySchema.EMPTY_SCHEMA;
-}
-
-// 

[2/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
PHOENIX-4996: Refactor PTableImpl to use Builder Pattern


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/11cc13b0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/11cc13b0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/11cc13b0

Branch: refs/heads/master
Commit: 11cc13b043c9d73c49134f27aef5e4c05dc6f30a
Parents: 6053ee6
Author: Chinmay Kulkarni 
Authored: Wed Oct 24 17:56:22 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 15:18:45 2018 -0800

--
 .../apache/phoenix/compile/DeleteCompiler.java  |6 +-
 .../apache/phoenix/compile/FromCompiler.java|   66 +-
 .../apache/phoenix/compile/JoinCompiler.java|   52 +-
 .../compile/TupleProjectionCompiler.java|   60 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   41 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   12 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   96 +-
 .../UngroupedAggregateRegionObserver.java   |6 +-
 .../coprocessor/WhereConstantParser.java|3 +-
 .../query/ConnectionlessQueryServicesImpl.java  |9 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  215 ++-
 .../apache/phoenix/schema/PMetaDataImpl.java|   28 +-
 .../org/apache/phoenix/schema/PTableImpl.java   | 1259 +++---
 .../org/apache/phoenix/schema/TableRef.java |   17 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   32 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   33 +-
 16 files changed, 1302 insertions(+), 633 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/11cc13b0/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 14ec45d..51366c0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -27,7 +27,6 @@ import java.util.Collections;
 import java.util.Iterator;
 import java.util.LinkedHashSet;
 import java.util.List;
-import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
@@ -90,7 +89,6 @@ import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.transaction.PhoenixTransactionProvider.Feature;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
@@ -616,7 +614,9 @@ public class DeleteCompiler {
 }
 });
 }
-PTable projectedTable = PTableImpl.makePTable(table, PTableType.PROJECTED, adjustedProjectedColumns);
+PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
+.setType(PTableType.PROJECTED)
+.build();
final TableRef projectedTableRef = new TableRef(projectedTable, targetTableRef.getLowerBoundTimeStamp(), targetTableRef.getTimeStamp());
 
 QueryPlan bestPlanToBe = dataPlan;
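
The hunk above shows the shape of the PHOENIX-4996 change: the static makePTable(...) factories give way to a chained builder that collects every attribute before constructing the table, which is what lets the PTableImpl fields shown earlier become final and removes the mutable no-arg construction path. A minimal, self-contained sketch of that builder shape follows; the Table and Builder names and fields are illustrative, not the actual PTableImpl API.

public final class Table {
    private final String name;       // final: assigned exactly once, in the private constructor
    private final Integer bucketNum;

    private Table(Builder builder) {
        this.name = builder.name;
        this.bucketNum = builder.bucketNum;
    }

    public static final class Builder {
        private String name;
        private Integer bucketNum;

        public Builder setName(String name) {
            this.name = name;
            return this; // returning this enables the chained style seen in the diff
        }

        public Builder setBucketNum(Integer bucketNum) {
            this.bucketNum = bucketNum;
            return this;
        }

        public Table build() {
            return new Table(this); // the only way to obtain a Table instance
        }
    }
}

Usage mirrors the diff: new Table.Builder().setName("T").setBucketNum(4).build(). Because every attribute is gathered before construction, the built object can be fully immutable and safely shared across threads.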

http://git-wip-us.apache.org/repos/asf/phoenix/blob/11cc13b0/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 80648a3..d0a49cc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
-import org.apache.phoenix.exception.SQLExceptionCode;
-import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.parse.AliasedNode;
@@ -82,6 +80,7 @@ import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ProjectedColumn;
+import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
@@ -284,7 +283,8 @@ public class 

[2/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
PHOENIX-4996: Refactor PTableImpl to use Builder Pattern


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ee8db198
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ee8db198
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ee8db198

Branch: refs/heads/4.x-HBase-1.4
Commit: ee8db198a3586a83593e4610b2ea8349f9371c1f
Parents: 51d38d7
Author: Chinmay Kulkarni 
Authored: Fri Nov 2 14:00:09 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 15:19:16 2018 -0800

--
 .../apache/phoenix/compile/DeleteCompiler.java  |5 +-
 .../apache/phoenix/compile/FromCompiler.java|   66 +-
 .../apache/phoenix/compile/JoinCompiler.java|   53 +-
 .../compile/TupleProjectionCompiler.java|   60 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   41 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   12 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   96 +-
 .../UngroupedAggregateRegionObserver.java   |6 +-
 .../coprocessor/WhereConstantParser.java|3 +-
 .../query/ConnectionlessQueryServicesImpl.java  |9 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  215 ++-
 .../apache/phoenix/schema/PMetaDataImpl.java|   28 +-
 .../org/apache/phoenix/schema/PTableImpl.java   | 1259 +++---
 .../org/apache/phoenix/schema/TableRef.java |   17 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   32 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   33 +-
 16 files changed, 1303 insertions(+), 632 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ee8db198/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 583085e..8c9a930 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -89,7 +89,6 @@ import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.transaction.PhoenixTransactionProvider.Feature;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
@@ -615,7 +614,9 @@ public class DeleteCompiler {
 }
 });
 }
-PTable projectedTable = PTableImpl.makePTable(table, PTableType.PROJECTED, adjustedProjectedColumns);
+PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
+.setType(PTableType.PROJECTED)
+.build();
final TableRef projectedTableRef = new TableRef(projectedTable, targetTableRef.getLowerBoundTimeStamp(), targetTableRef.getTimeStamp());
 
 QueryPlan bestPlanToBe = dataPlan;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ee8db198/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index efc66a9..2701af0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
-import org.apache.phoenix.exception.SQLExceptionCode;
-import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.parse.AliasedNode;
@@ -82,6 +80,7 @@ import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ProjectedColumn;
+import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
@@ -284,7 +283,8 @@ public class FromCompiler {
 column.getTimestamp());
 projectedColumns.add(projectedColumn);
 }
-PTable t = PTableImpl.makePTable(table, projectedColumns);
+PTable t = 

[1/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 c509d58f1 -> 02a6bbce5


http://git-wip-us.apache.org/repos/asf/phoenix/blob/02a6bbce/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 9f06e04..7939b97 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -36,6 +36,7 @@ import java.util.Map.Entry;
 
 import javax.annotation.Nonnull;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Delete;
@@ -69,7 +70,6 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PFloat;
-import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.util.ByteUtil;
@@ -102,164 +102,661 @@ import com.google.common.collect.Maps;
 public class PTableImpl implements PTable {
 private static final Integer NO_SALTING = -1;
 
-private PTableKey key;
-private PName name;
-private PName schemaName = PName.EMPTY_NAME;
-private PName tableName = PName.EMPTY_NAME;
-private PName tenantId;
-private PTableType type;
-private PIndexState state;
-private long sequenceNumber;
-private long timeStamp;
-private long indexDisableTimestamp;
+private IndexMaintainer indexMaintainer;
+private ImmutableBytesWritable indexMaintainersPtr;
+
+private final PTableKey key;
+private final PName name;
+private final PName schemaName;
+private final PName tableName;
+private final PName tenantId;
+private final PTableType type;
+private final PIndexState state;
+private final long sequenceNumber;
+private final long timeStamp;
+private final long indexDisableTimestamp;
 // Have MultiMap for String->PColumn (may need family qualifier)
-private List<PColumn> pkColumns;
-private List<PColumn> allColumns;
+private final List<PColumn> pkColumns;
+private final List<PColumn> allColumns;
// columns that were inherited from a parent table but that were dropped in the view
-private List<PColumn> excludedColumns;
-private List<PColumnFamily> families;
-private Map<byte[], PColumnFamily> familyByBytes;
-private Map<String, PColumnFamily> familyByString;
-private ListMultimap<String, PColumn> columnsByName;
-private Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
-private PName pkName;
-private Integer bucketNum;
-private RowKeySchema rowKeySchema;
+private final List<PColumn> excludedColumns;
+private final List<PColumnFamily> families;
+private final Map<byte[], PColumnFamily> familyByBytes;
+private final Map<String, PColumnFamily> familyByString;
+private final ListMultimap<String, PColumn> columnsByName;
+private final Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
+private final PName pkName;
+private final Integer bucketNum;
+private final RowKeySchema rowKeySchema;
 // Indexes associated with this table.
-private List<PTable> indexes;
+private final List<PTable> indexes;
 // Data table name that the index is created on.
-private PName parentName;
-private PName parentSchemaName;
-private PName parentTableName;
-private List<PName> physicalNames;
-private boolean isImmutableRows;
-private IndexMaintainer indexMaintainer;
-private ImmutableBytesWritable indexMaintainersPtr;
-private PName defaultFamilyName;
-private String viewStatement;
-private boolean disableWAL;
-private boolean multiTenant;
-private boolean storeNulls;
-private TransactionFactory.Provider transactionProvider;
-private ViewType viewType;
-private PDataType viewIndexType;
-private Long viewIndexId;
-private int estimatedSize;
-private IndexType indexType;
-private int baseColumnCount;
-private boolean rowKeyOrderOptimizable; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private boolean hasColumnsRequiringUpgrade; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private int rowTimestampColPos;
-private long updateCacheFrequency;
-private boolean isNamespaceMapped;
-private String autoPartitionSeqName;
-private boolean isAppendOnlySchema;
-private ImmutableStorageScheme immutableStorageScheme;
-private QualifierEncodingScheme qualifierEncodingScheme;
-private EncodedCQCounter encodedCQCounter;
-private Boolean useStatsForParallelization;
-
-public PTableImpl() {
-this.indexes = Collections.emptyList();
-this.physicalNames = Collections.emptyList();
-this.rowKeySchema = RowKeySchema.EMPTY_SCHEMA;
-}
-
-

[1/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 51d38d7fc -> ee8db198a


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ee8db198/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 9f06e04..7939b97 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -36,6 +36,7 @@ import java.util.Map.Entry;
 
 import javax.annotation.Nonnull;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Delete;
@@ -69,7 +70,6 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PFloat;
-import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.util.ByteUtil;
@@ -102,164 +102,661 @@ import com.google.common.collect.Maps;
 public class PTableImpl implements PTable {
 private static final Integer NO_SALTING = -1;
 
-private PTableKey key;
-private PName name;
-private PName schemaName = PName.EMPTY_NAME;
-private PName tableName = PName.EMPTY_NAME;
-private PName tenantId;
-private PTableType type;
-private PIndexState state;
-private long sequenceNumber;
-private long timeStamp;
-private long indexDisableTimestamp;
+private IndexMaintainer indexMaintainer;
+private ImmutableBytesWritable indexMaintainersPtr;
+
+private final PTableKey key;
+private final PName name;
+private final PName schemaName;
+private final PName tableName;
+private final PName tenantId;
+private final PTableType type;
+private final PIndexState state;
+private final long sequenceNumber;
+private final long timeStamp;
+private final long indexDisableTimestamp;
 // Have MultiMap for String->PColumn (may need family qualifier)
-private List<PColumn> pkColumns;
-private List<PColumn> allColumns;
+private final List<PColumn> pkColumns;
+private final List<PColumn> allColumns;
 // columns that were inherited from a parent table but that were dropped in the view
-private List<PColumn> excludedColumns;
-private List<PColumnFamily> families;
-private Map<byte[], PColumnFamily> familyByBytes;
-private Map<String, PColumnFamily> familyByString;
-private ListMultimap<String, PColumn> columnsByName;
-private Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
-private PName pkName;
-private Integer bucketNum;
-private RowKeySchema rowKeySchema;
+private final List<PColumn> excludedColumns;
+private final List<PColumnFamily> families;
+private final Map<byte[], PColumnFamily> familyByBytes;
+private final Map<String, PColumnFamily> familyByString;
+private final ListMultimap<String, PColumn> columnsByName;
+private final Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
+private final PName pkName;
+private final Integer bucketNum;
+private final RowKeySchema rowKeySchema;
 // Indexes associated with this table.
-private List<PTable> indexes;
+private final List<PTable> indexes;
 // Data table name that the index is created on.
-private PName parentName;
-private PName parentSchemaName;
-private PName parentTableName;
-private List<PName> physicalNames;
-private boolean isImmutableRows;
-private IndexMaintainer indexMaintainer;
-private ImmutableBytesWritable indexMaintainersPtr;
-private PName defaultFamilyName;
-private String viewStatement;
-private boolean disableWAL;
-private boolean multiTenant;
-private boolean storeNulls;
-private TransactionFactory.Provider transactionProvider;
-private ViewType viewType;
-private PDataType viewIndexType;
-private Long viewIndexId;
-private int estimatedSize;
-private IndexType indexType;
-private int baseColumnCount;
-private boolean rowKeyOrderOptimizable; // TODO: remove when required that tables have been upgraded for PHOENIX-2067
-private boolean hasColumnsRequiringUpgrade; // TODO: remove when required that tables have been upgraded for PHOENIX-2067
-private int rowTimestampColPos;
-private long updateCacheFrequency;
-private boolean isNamespaceMapped;
-private String autoPartitionSeqName;
-private boolean isAppendOnlySchema;
-private ImmutableStorageScheme immutableStorageScheme;
-private QualifierEncodingScheme qualifierEncodingScheme;
-private EncodedCQCounter encodedCQCounter;
-private Boolean useStatsForParallelization;
-
-public PTableImpl() {
-this.indexes = Collections.emptyList();
-this.physicalNames = Collections.emptyList();
-this.rowKeySchema = RowKeySchema.EMPTY_SCHEMA;
-}
-
-

[2/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
PHOENIX-4996: Refactor PTableImpl to use Builder Pattern


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d6083ae5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d6083ae5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d6083ae5

Branch: refs/heads/4.x-HBase-1.3
Commit: d6083ae5c598f5293adbfb24370b8e40962bc3d7
Parents: 9bfaf18
Author: Chinmay Kulkarni 
Authored: Fri Nov 2 14:00:09 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 15:19:09 2018 -0800

--
 .../apache/phoenix/compile/DeleteCompiler.java  |5 +-
 .../apache/phoenix/compile/FromCompiler.java|   66 +-
 .../apache/phoenix/compile/JoinCompiler.java|   53 +-
 .../compile/TupleProjectionCompiler.java|   60 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   41 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   12 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   96 +-
 .../UngroupedAggregateRegionObserver.java   |6 +-
 .../coprocessor/WhereConstantParser.java|3 +-
 .../query/ConnectionlessQueryServicesImpl.java  |9 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  215 ++-
 .../apache/phoenix/schema/PMetaDataImpl.java|   28 +-
 .../org/apache/phoenix/schema/PTableImpl.java   | 1259 +++---
 .../org/apache/phoenix/schema/TableRef.java |   17 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   32 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   33 +-
 16 files changed, 1303 insertions(+), 632 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d6083ae5/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 583085e..8c9a930 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -89,7 +89,6 @@ import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.transaction.PhoenixTransactionProvider.Feature;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
@@ -615,7 +614,9 @@ public class DeleteCompiler {
 }
 });
 }
-PTable projectedTable = PTableImpl.makePTable(table, PTableType.PROJECTED, adjustedProjectedColumns);
+PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
+.setType(PTableType.PROJECTED)
+.build();
 final TableRef projectedTableRef = new TableRef(projectedTable, targetTableRef.getLowerBoundTimeStamp(), targetTableRef.getTimeStamp());
 
 QueryPlan bestPlanToBe = dataPlan;
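
This call-site change is the pattern repeated throughout the commit: the
positional makePTable(...) factory becomes a fluent chain, so optional
properties are named at the call site instead of passed positionally. Only
builderWithColumns, setType and build are confirmed by this diff; any other
setter is assumed to follow the same shape:

    PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
            .setType(PTableType.PROJECTED) // was the second positional argument of makePTable
            .build();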

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d6083ae5/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index efc66a9..2701af0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
-import org.apache.phoenix.exception.SQLExceptionCode;
-import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.parse.AliasedNode;
@@ -82,6 +80,7 @@ import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ProjectedColumn;
+import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
@@ -284,7 +283,8 @@ public class FromCompiler {
 column.getTimestamp());
 projectedColumns.add(projectedColumn);
 }
-PTable t = PTableImpl.makePTable(table, projectedColumns);
+PTable t = 
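
The message is cut off mid-assignment above. Judging by the identical
migration in DeleteCompiler earlier in this commit, the line presumably
continues with the same builder chain; a hedged guess, not the verbatim
commit:

    PTable t = PTableImpl.builderWithColumns(table, projectedColumns).build();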

[1/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 9bfaf183a -> d6083ae5c


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d6083ae5/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 9f06e04..7939b97 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -36,6 +36,7 @@ import java.util.Map.Entry;
 
 import javax.annotation.Nonnull;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Delete;
@@ -69,7 +70,6 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PFloat;
-import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.util.ByteUtil;
@@ -102,164 +102,661 @@ import com.google.common.collect.Maps;
 public class PTableImpl implements PTable {
 private static final Integer NO_SALTING = -1;
 
-private PTableKey key;
-private PName name;
-private PName schemaName = PName.EMPTY_NAME;
-private PName tableName = PName.EMPTY_NAME;
-private PName tenantId;
-private PTableType type;
-private PIndexState state;
-private long sequenceNumber;
-private long timeStamp;
-private long indexDisableTimestamp;
+private IndexMaintainer indexMaintainer;
+private ImmutableBytesWritable indexMaintainersPtr;
+
+private final PTableKey key;
+private final PName name;
+private final PName schemaName;
+private final PName tableName;
+private final PName tenantId;
+private final PTableType type;
+private final PIndexState state;
+private final long sequenceNumber;
+private final long timeStamp;
+private final long indexDisableTimestamp;
 // Have MultiMap for String->PColumn (may need family qualifier)
-private List<PColumn> pkColumns;
-private List<PColumn> allColumns;
+private final List<PColumn> pkColumns;
+private final List<PColumn> allColumns;
 // columns that were inherited from a parent table but that were dropped in the view
-private List<PColumn> excludedColumns;
-private List<PColumnFamily> families;
-private Map<byte[], PColumnFamily> familyByBytes;
-private Map<String, PColumnFamily> familyByString;
-private ListMultimap<String, PColumn> columnsByName;
-private Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
-private PName pkName;
-private Integer bucketNum;
-private RowKeySchema rowKeySchema;
+private final List<PColumn> excludedColumns;
+private final List<PColumnFamily> families;
+private final Map<byte[], PColumnFamily> familyByBytes;
+private final Map<String, PColumnFamily> familyByString;
+private final ListMultimap<String, PColumn> columnsByName;
+private final Map<KVColumnFamilyQualifier, PColumn> kvColumnsByQualifiers;
+private final PName pkName;
+private final Integer bucketNum;
+private final RowKeySchema rowKeySchema;
 // Indexes associated with this table.
-private List<PTable> indexes;
+private final List<PTable> indexes;
 // Data table name that the index is created on.
-private PName parentName;
-private PName parentSchemaName;
-private PName parentTableName;
-private List<PName> physicalNames;
-private boolean isImmutableRows;
-private IndexMaintainer indexMaintainer;
-private ImmutableBytesWritable indexMaintainersPtr;
-private PName defaultFamilyName;
-private String viewStatement;
-private boolean disableWAL;
-private boolean multiTenant;
-private boolean storeNulls;
-private TransactionFactory.Provider transactionProvider;
-private ViewType viewType;
-private PDataType viewIndexType;
-private Long viewIndexId;
-private int estimatedSize;
-private IndexType indexType;
-private int baseColumnCount;
-private boolean rowKeyOrderOptimizable; // TODO: remove when required that tables have been upgraded for PHOENIX-2067
-private boolean hasColumnsRequiringUpgrade; // TODO: remove when required that tables have been upgraded for PHOENIX-2067
-private int rowTimestampColPos;
-private long updateCacheFrequency;
-private boolean isNamespaceMapped;
-private String autoPartitionSeqName;
-private boolean isAppendOnlySchema;
-private ImmutableStorageScheme immutableStorageScheme;
-private QualifierEncodingScheme qualifierEncodingScheme;
-private EncodedCQCounter encodedCQCounter;
-private Boolean useStatsForParallelization;
-
-public PTableImpl() {
-this.indexes = Collections.emptyList();
-this.physicalNames = Collections.emptyList();
-this.rowKeySchema = RowKeySchema.EMPTY_SCHEMA;
-}
-
-

[2/2] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-06 Thread tdsilva
PHOENIX-4996: Refactor PTableImpl to use Builder Pattern


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/02a6bbce
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/02a6bbce
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/02a6bbce

Branch: refs/heads/4.x-HBase-1.2
Commit: 02a6bbce5a210278639e9d223ceb9a16cc645189
Parents: c509d58
Author: Chinmay Kulkarni 
Authored: Fri Nov 2 14:00:09 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 15:19:05 2018 -0800

--
 .../apache/phoenix/compile/DeleteCompiler.java  |5 +-
 .../apache/phoenix/compile/FromCompiler.java|   66 +-
 .../apache/phoenix/compile/JoinCompiler.java|   53 +-
 .../compile/TupleProjectionCompiler.java|   60 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   41 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   12 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   96 +-
 .../UngroupedAggregateRegionObserver.java   |6 +-
 .../coprocessor/WhereConstantParser.java|3 +-
 .../query/ConnectionlessQueryServicesImpl.java  |9 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  215 ++-
 .../apache/phoenix/schema/PMetaDataImpl.java|   28 +-
 .../org/apache/phoenix/schema/PTableImpl.java   | 1259 +++---
 .../org/apache/phoenix/schema/TableRef.java |   17 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   32 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   33 +-
 16 files changed, 1303 insertions(+), 632 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/02a6bbce/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 583085e..8c9a930 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -89,7 +89,6 @@ import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.transaction.PhoenixTransactionProvider.Feature;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
@@ -615,7 +614,9 @@ public class DeleteCompiler {
 }
 });
 }
-PTable projectedTable = PTableImpl.makePTable(table, PTableType.PROJECTED, adjustedProjectedColumns);
+PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
+.setType(PTableType.PROJECTED)
+.build();
 final TableRef projectedTableRef = new TableRef(projectedTable, targetTableRef.getLowerBoundTimeStamp(), targetTableRef.getTimeStamp());
 
 QueryPlan bestPlanToBe = dataPlan;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/02a6bbce/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index efc66a9..2701af0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
-import org.apache.phoenix.exception.SQLExceptionCode;
-import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.parse.AliasedNode;
@@ -82,6 +80,7 @@ import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ProjectedColumn;
+import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
@@ -284,7 +283,8 @@ public class FromCompiler {
 column.getTimestamp());
 projectedColumns.add(projectedColumn);
 }
-PTable t = PTableImpl.makePTable(table, projectedColumns);
+PTable t = 

Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #536

2018-11-06 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H21 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git fetch --tags --progress https://git-wip-us.apache.org/repos/asf/phoenix.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/4.x-HBase-1.2^{commit} # timeout=10
Checking out Revision c509d58f12b73ba9e24b53d2e9ca0271666a400d (origin/4.x-HBase-1.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f c509d58f12b73ba9e24b53d2e9ca0271666a400d
Commit message: "PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark"
 > git rev-list --no-walk c509d58f12b73ba9e24b53d2e9ca0271666a400d # timeout=10
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-4.x-HBase-1.2] $ /bin/bash -xe /tmp/jenkins1260528070483432507.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
htrace-core4
[Phoenix-4.x-HBase-1.2] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-core:[unknown-version], line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-kafka:[unknown-version], line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-hive:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ 
Build failed in Jenkins: Phoenix-4.x-HBase-1.2 #535

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using

--
[...truncated 1.99 KB...]
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-core:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-core:[unknown-version], line 65, column 23
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-flume:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-kafka:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-kafka:[unknown-version], line 347, column 20
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pig:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-queryserver:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-pherf:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-spark:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-hive:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-client:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-client:[unknown-version], line 52, column 24
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-server:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ org.apache.phoenix:phoenix-server:[unknown-version], line 50, column 24
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-assembly:pom:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-tracing-webapp:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix-load-balancer:jar:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter.
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for org.apache.phoenix:phoenix:pom:4.14.0-HBase-1.2
[WARNING] Reporting configuration should be done in <reporting> section, not in maven-site-plugin <configuration> as reportPlugins parameter. @ line 467, column 24
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they 

[3/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 578a3af..792d08f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -17,17 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.ROW1;
-import static org.apache.phoenix.util.TestUtil.ROW2;
-import static org.apache.phoenix.util.TestUtil.ROW3;
-import static org.apache.phoenix.util.TestUtil.ROW4;
-import static org.apache.phoenix.util.TestUtil.ROW5;
-import static org.apache.phoenix.util.TestUtil.ROW6;
-import static org.apache.phoenix.util.TestUtil.ROW7;
-import static org.apache.phoenix.util.TestUtil.ROW8;
-import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -40,83 +30,10 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
-import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
-
-public class OrderByIT extends ParallelStatsDisabledIT {
-
-@Test
-public void testMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
-
-
-@Test
-public void testDescMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string || entity_id desc";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
+public class OrderByIT extends BaseOrderByIT {
 
 @Test
 public void testOrderByWithPosition() throws Exception {
@@ -151,8 +68,8 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 assertTrue(rs.next());
 assertEquals(1,rs.getInt(1));
 assertTrue(rs.next());
-assertEquals(1,rs.getInt(1));  
-assertFalse(rs.next());  
+  

[6/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using 
phoenix-spark


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6053ee63
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6053ee63
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6053ee63

Branch: refs/heads/master
Commit: 6053ee63a4faa3ef5bd1f7cbeb8c9787efcbb41c
Parents: c2d33ed
Author: Thomas D'Silva 
Authored: Thu Oct 18 22:00:01 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 14:53:24 2018 -0800

--
 .../org/apache/phoenix/end2end/AggregateIT.java |  987 +---
 .../apache/phoenix/end2end/BaseAggregateIT.java | 1022 +
 .../apache/phoenix/end2end/BaseOrderByIT.java   |  940 
 .../org/apache/phoenix/end2end/OrderByIT.java   |  943 ++--
 .../end2end/ParallelStatsDisabledIT.java|   40 +
 .../end2end/salted/BaseSaltedTableIT.java   |  474 
 .../phoenix/end2end/salted/SaltedTableIT.java   |  450 +---
 .../org/apache/phoenix/util/QueryBuilder.java   |  211 
 .../java/org/apache/phoenix/util/QueryUtil.java |   38 +-
 .../index/IndexScrutinyTableOutputTest.java |6 +-
 .../util/PhoenixConfigurationUtilTest.java  |6 +-
 .../org/apache/phoenix/util/QueryUtilTest.java  |   10 +-
 phoenix-spark/pom.xml   |8 +
 .../org/apache/phoenix/spark/AggregateIT.java   |   91 ++
 .../org/apache/phoenix/spark/OrderByIT.java |  460 
 .../org/apache/phoenix/spark/SaltedTableIT.java |   53 +
 .../org/apache/phoenix/spark/SparkUtil.java |   87 ++
 .../apache/phoenix/spark/PhoenixSparkIT.scala   |9 +-
 .../apache/phoenix/spark/SparkResultSet.java| 1056 ++
 .../org/apache/phoenix/spark/PhoenixRDD.scala   |   27 +-
 20 files changed, 4649 insertions(+), 2269 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 2059311..8916d4d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -18,506 +18,28 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
-import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.List;
 import java.util.Properties;
 
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.compile.QueryPlan;
-import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
-import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.schema.AmbiguousColumnException;
-import org.apache.phoenix.schema.types.PChar;
-import org.apache.phoenix.schema.types.PInteger;
-import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
+public class AggregateIT extends BaseAggregateIT {
 
-public class AggregateIT extends ParallelStatsDisabledIT {
-private static void initData(Connection conn, String tableName) throws SQLException {
-conn.createStatement().execute("create table " + tableName +
-"   (id varchar not null primary key,\n" +
-"uri varchar, appcpu integer)");
-insertRow(conn, tableName, "Report1", 10, 1);
-insertRow(conn, tableName, "Report2", 10, 2);
-insertRow(conn, tableName, "Report3", 30, 3);
-insertRow(conn, tableName, "Report4", 30, 4);
-insertRow(conn, tableName, "SOQL1", 10, 5);
-insertRow(conn, tableName, "SOQL2", 10, 6);
-insertRow(conn, tableName, "SOQL3", 30, 7);
-insertRow(conn, tableName, "SOQL4", 30, 8);
-conn.commit();
-}
-
-private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
-PreparedStatement statement 
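
The excerpt truncates mid-statement here, but the same helper appears in full
in the BaseAggregateIT excerpt later in this thread ([5/6]), where it
continues:

    PreparedStatement statement = conn.prepareStatement(
            "UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
    statement.setString(1, "id" + id);
    statement.setString(2, uri);
    statement.setInt(3, appcpu);
    statement.executeUpdate();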

[2/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
index c9168f1..69c9869 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
@@ -37,104 +37,18 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 
-
 /**
  * Tests for table with transparent salting.
  */
 
-public class SaltedTableIT extends ParallelStatsDisabledIT {
-
-   private static String getUniqueTableName() {
-   return SchemaUtil.getTableName(generateUniqueName(), generateUniqueName());
-   }
-   
-private static String initTableValues(byte[][] splits) throws Exception {
-   String tableName = getUniqueTableName();
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-
-// Rows we inserted:
-// 1ab123abc111
-// 1abc456abc111
-// 1de123abc111
-// 2abc123def222 
-// 3abc123ghi333
-// 4abc123jkl444
-try {
-// Upsert with no columns specified.
-ensureTableCreated(getUrl(), tableName, TABLE_WITH_SALTING, splits, null, null);
-String query = "UPSERT INTO " + tableName + " VALUES(?,?,?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(query);
-stmt.setInt(1, 1);
-stmt.setString(2, "ab");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 1);
-stmt.setString(2, "abc");
-stmt.setString(3, "456");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-// Test upsert when statement explicitly specifies the columns to 
upsert into.
-query = "UPSERT INTO " + tableName +
-" (a_integer, a_string, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-
-stmt.setInt(1, 1);
-stmt.setString(2, "de");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 2);
-stmt.setString(2, "abc");
-stmt.setString(3, "123");
-stmt.setString(4, "def");
-stmt.setInt(5, 222);
-stmt.execute();
-conn.commit();
-
-// Test upsert when order of column is shuffled.
-query = "UPSERT INTO " + tableName +
-" (a_string, a_integer, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-stmt.setString(1, "abc");
-stmt.setInt(2, 3);
-stmt.setString(3, "123");
-stmt.setString(4, "ghi");
-stmt.setInt(5, 333);
-stmt.execute();
-conn.commit();
-
-stmt.setString(1, "abc");
-stmt.setInt(2, 4);
-stmt.setString(3, "123");
-stmt.setString(4, "jkl");
-stmt.setInt(5, 444);
-stmt.execute();
-conn.commit();
-} finally {
-conn.close();
-}
-return tableName;
-}
+public class SaltedTableIT extends BaseSaltedTableIT {
 
 @Test
 public void testTableWithInvalidBucketNumber() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
-String query = "create table " + getUniqueTableName() + " (a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS = 257";
+String query = "create table " + generateUniqueName() + " (a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS = 257";
 PreparedStatement stmt = conn.prepareStatement(query);
 stmt.execute();
 fail("Should have caught exception");
@@ -148,370 +62,12 @@ public class SaltedTableIT extends ParallelStatsDisabledIT {
 @Test
 public void testTableWithSplit() throws Exception {
 try {
-createTestTable(getUrl(), "create table " + 

[4/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
new file mode 100644
index 000..31bf050
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
@@ -0,0 +1,940 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.ROW1;
+import static org.apache.phoenix.util.TestUtil.ROW2;
+import static org.apache.phoenix.util.TestUtil.ROW3;
+import static org.apache.phoenix.util.TestUtil.ROW4;
+import static org.apache.phoenix.util.TestUtil.ROW5;
+import static org.apache.phoenix.util.TestUtil.ROW6;
+import static org.apache.phoenix.util.TestUtil.ROW7;
+import static org.apache.phoenix.util.TestUtil.ROW8;
+import static org.apache.phoenix.util.TestUtil.ROW9;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.junit.Test;
+
+
+public abstract class BaseOrderByIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING, ENTITY_ID");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW1,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW4,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW7,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW2,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW5,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW8,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+
+assertFalse(rs.next());
+}
+}
+
+
+@Test
+public void testDescMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING || ENTITY_ID DESC");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue 
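
The refactored tests route SQL construction through the QueryBuilder helper
and the executeQuery method added to ParallelStatsDisabledIT in this commit
(both visible in the diffstat). A hedged sketch of what the builder in
testMultiOrderByExpr presumably renders and how it is run (exact whitespace
and quoting are assumptions):

    // renders to roughly: SELECT ENTITY_ID, B_STRING FROM <tableName> ORDER BY B_STRING, ENTITY_ID
    ResultSet rs = executeQuery(conn, queryBuilder);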

[1/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master c2d33ed38 -> 6053ee63a


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
--
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
new file mode 100644
index 000..0cb8009
--- /dev/null
+++ b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
@@ -0,0 +1,1056 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.spark;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.util.SQLCloseable;
+import org.apache.spark.sql.Row;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.NClob;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper class to convert a List of Rows returned from a dataset to a sql ResultSet
+ */
+public class SparkResultSet implements ResultSet, SQLCloseable {
+
+private int index = -1;
+private List<Row> dataSetRows;
+private List<String> columnNames;
+private boolean wasNull = false;
+
+public SparkResultSet(List<Row> rows, String[] columnNames) {
+this.dataSetRows = rows;
+this.columnNames = Arrays.asList(columnNames);
+}
+
+private Row getCurrentRow() {
+return dataSetRows.get(index);
+}
+
+@Override
+public boolean absolute(int row) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void afterLast() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void beforeFirst() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void cancelRowUpdates() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void clearWarnings() throws SQLException {
+}
+
+@Override
+public void close() throws SQLException {
+}
+
+@Override
+public void deleteRow() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public int findColumn(String columnLabel) throws SQLException {
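+// Note: java.sql.ResultSet columns are 1-based, hence the +1; the label is
+// upper-cased to match Phoenix's normalization of unquoted identifiers. An
+// unknown label yields 0 here rather than the SQLException the JDBC contract expects.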
+return columnNames.indexOf(columnLabel.toUpperCase())+1;
+}
+
+@Override
+public boolean first() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkOpen() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkCursorState() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+ 
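
The excerpt cuts off among the unsupported-operation stubs, but the
constructor and cursor fields above already show the intended use: wrap the
rows collected from a Spark Dataset so JDBC-style assertions can run
unchanged. A hedged usage sketch (Dataset, collectAsList and columns are
standard Spark SQL API; the session and table names are placeholders):

    Dataset<Row> df = spark.sql("SELECT ID, URI, APPCPU FROM " + tableName);
    ResultSet rs = new SparkResultSet(df.collectAsList(), df.columns());
    while (rs.next()) { // standard ResultSet cursor; advances the index field seen above
        System.out.println(rs.getString(1));
    }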

[5/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/6053ee63/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
new file mode 100644
index 000..5b466df
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
@@ -0,0 +1,1022 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.AmbiguousColumnException;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+
+public abstract class BaseAggregateIT extends ParallelStatsDisabledIT {
+
+private static void initData(Connection conn, String tableName) throws SQLException {
+conn.createStatement().execute("create table " + tableName +
+"   (id varchar not null primary key,\n" +
+"uri varchar, appcpu integer)");
+insertRow(conn, tableName, "Report1", 10, 1);
+insertRow(conn, tableName, "Report2", 10, 2);
+insertRow(conn, tableName, "Report3", 30, 3);
+insertRow(conn, tableName, "Report4", 30, 4);
+insertRow(conn, tableName, "SOQL1", 10, 5);
+insertRow(conn, tableName, "SOQL2", 10, 6);
+insertRow(conn, tableName, "SOQL3", 30, 7);
+insertRow(conn, tableName, "SOQL4", 30, 8);
+conn.commit();
+}
+
+private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
+statement.setString(1, "id" + id);
+statement.setString(2, uri);
+statement.setInt(3, appcpu);
+statement.executeUpdate();
+}
+
+@Test
+public void testDuplicateTrailingAggExpr() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.FALSE.toString());
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+
+conn.createStatement().execute("create table " + tableName +
+"   (nam VARCHAR(20), address VARCHAR(20), id BIGINT "
++ "constraint my_pk primary key (id))");
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(nam, address, id) values (?,?,?)");
+statement.setString(1, "pulkit");
+statement.setString(2, "badaun");
+statement.setInt(3, 1);
+statement.executeUpdate();
+conn.commit();
+
+QueryBuilder queryBuilder = new QueryBuilder()
+.setDistinct(true)
+.setSelectExpression("'harshit' as 

[4/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
new file mode 100644
index 000..31bf050
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
@@ -0,0 +1,940 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.ROW1;
+import static org.apache.phoenix.util.TestUtil.ROW2;
+import static org.apache.phoenix.util.TestUtil.ROW3;
+import static org.apache.phoenix.util.TestUtil.ROW4;
+import static org.apache.phoenix.util.TestUtil.ROW5;
+import static org.apache.phoenix.util.TestUtil.ROW6;
+import static org.apache.phoenix.util.TestUtil.ROW7;
+import static org.apache.phoenix.util.TestUtil.ROW8;
+import static org.apache.phoenix.util.TestUtil.ROW9;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.junit.Test;
+
+
+public abstract class BaseOrderByIT extends ParallelStatsDisabledIT {
+
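+// Shared ORDER BY test bodies. Queries are expressed through QueryBuilder
+// rather than raw SQL strings so that concrete subclasses can route
+// execution through different backends (plain JDBC here, with a
+// phoenix-spark variant listed in this patch's diffstat).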
+@Test
+public void testMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING, ENTITY_ID");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW1,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW4,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW7,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW2,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW5,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW8,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+
+assertFalse(rs.next());
+}
+}
+
+
+@Test
+public void testDescMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING || ENTITY_ID DESC");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue 

[2/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
index c9168f1..69c9869 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
@@ -37,104 +37,18 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 
-
 /**
 * Tests for tables with transparent salting.
  */
 
-public class SaltedTableIT extends ParallelStatsDisabledIT {
-
-   private static String getUniqueTableName() {
-   return SchemaUtil.getTableName(generateUniqueName(), generateUniqueName());
-   }
-   
-private static String initTableValues(byte[][] splits) throws Exception {
-   String tableName = getUniqueTableName();
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-
-// Rows we inserted:
-// 1, ab,  123, abc, 111
-// 1, abc, 456, abc, 111
-// 1, de,  123, abc, 111
-// 2, abc, 123, def, 222
-// 3, abc, 123, ghi, 333
-// 4, abc, 123, jkl, 444
-try {
-// Upsert with no columns specified.
-ensureTableCreated(getUrl(), tableName, TABLE_WITH_SALTING, splits, null, null);
-String query = "UPSERT INTO " + tableName + " VALUES(?,?,?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(query);
-stmt.setInt(1, 1);
-stmt.setString(2, "ab");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 1);
-stmt.setString(2, "abc");
-stmt.setString(3, "456");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-// Test upsert when statement explicitly specifies the columns to upsert into.
-query = "UPSERT INTO " + tableName +
-" (a_integer, a_string, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-
-stmt.setInt(1, 1);
-stmt.setString(2, "de");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 2);
-stmt.setString(2, "abc");
-stmt.setString(3, "123");
-stmt.setString(4, "def");
-stmt.setInt(5, 222);
-stmt.execute();
-conn.commit();
-
-// Test upsert when the order of columns is shuffled.
-query = "UPSERT INTO " + tableName +
-" (a_string, a_integer, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-stmt.setString(1, "abc");
-stmt.setInt(2, 3);
-stmt.setString(3, "123");
-stmt.setString(4, "ghi");
-stmt.setInt(5, 333);
-stmt.execute();
-conn.commit();
-
-stmt.setString(1, "abc");
-stmt.setInt(2, 4);
-stmt.setString(3, "123");
-stmt.setString(4, "jkl");
-stmt.setInt(5, 444);
-stmt.execute();
-conn.commit();
-} finally {
-conn.close();
-}
-return tableName;
-}
+public class SaltedTableIT extends BaseSaltedTableIT {
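+// The shared upsert and scan coverage now lives in BaseSaltedTableIT; the
+// salting-specific DDL tests below remain in this subclass.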
 
 @Test
 public void testTableWithInvalidBucketNumber() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
-String query = "create table " + getUniqueTableName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
+String query = "create table " + generateUniqueName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
 PreparedStatement stmt = conn.prepareStatement(query);
 stmt.execute();
 fail("Should have caught exception");
@@ -148,370 +62,12 @@ public class SaltedTableIT extends ParallelStatsDisabledIT {
 @Test
 public void testTableWithSplit() throws Exception {
 try {
-createTestTable(getUrl(), "create table " + 

[6/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/51d38d7f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/51d38d7f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/51d38d7f

Branch: refs/heads/4.x-HBase-1.4
Commit: 51d38d7fc1978468670053c980a4a3a866f52287
Parents: 8ccf69f
Author: Thomas D'Silva 
Authored: Thu Oct 18 22:00:01 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 14:52:26 2018 -0800

--
 .../org/apache/phoenix/end2end/AggregateIT.java |  987 +---
 .../apache/phoenix/end2end/BaseAggregateIT.java | 1022 +
 .../apache/phoenix/end2end/BaseOrderByIT.java   |  940 
 .../org/apache/phoenix/end2end/OrderByIT.java   |  943 ++--
 .../end2end/ParallelStatsDisabledIT.java|   40 +
 .../end2end/salted/BaseSaltedTableIT.java   |  474 
 .../phoenix/end2end/salted/SaltedTableIT.java   |  450 +---
 .../org/apache/phoenix/util/QueryBuilder.java   |  211 
 .../java/org/apache/phoenix/util/QueryUtil.java |   38 +-
 .../index/IndexScrutinyTableOutputTest.java |6 +-
 .../util/PhoenixConfigurationUtilTest.java  |6 +-
 .../org/apache/phoenix/util/QueryUtilTest.java  |   10 +-
 phoenix-spark/pom.xml   |8 +
 .../org/apache/phoenix/spark/AggregateIT.java   |   91 ++
 .../org/apache/phoenix/spark/OrderByIT.java |  460 
 .../org/apache/phoenix/spark/SaltedTableIT.java |   53 +
 .../org/apache/phoenix/spark/SparkUtil.java |   87 ++
 .../apache/phoenix/spark/PhoenixSparkIT.scala   |9 +-
 .../apache/phoenix/spark/SparkResultSet.java| 1056 ++
 .../org/apache/phoenix/spark/PhoenixRDD.scala   |   27 +-
 pom.xml |2 +-
 21 files changed, 4650 insertions(+), 2270 deletions(-)
--
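The heart of this refactor is visible in the hunks below: per-branch test classes shrink to subclasses of new abstract Base*IT classes, and raw SQL strings are replaced by the new QueryBuilder helper. A minimal sketch of the resulting pattern, assuming executeQuery(Connection, QueryBuilder) is the hook added to ParallelStatsDisabledIT (its exact signature is not shown in this excerpt):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

import com.google.common.collect.Lists;
import org.apache.phoenix.util.PropertiesUtil;
import org.apache.phoenix.util.QueryBuilder;
import org.apache.phoenix.util.TestUtil;

public abstract class BaseExampleIT extends ParallelStatsDisabledIT {
    protected void runOrderByQuery(String tableName) throws Exception {
        // Build the query declaratively instead of concatenating SQL.
        QueryBuilder queryBuilder = new QueryBuilder()
                .setSelectColumns(Lists.newArrayList("ENTITY_ID", "B_STRING"))
                .setFullTableName(tableName)
                .setOrderByClause("B_STRING, ENTITY_ID");
        Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
            // The JDBC subclass executes the generated SQL directly; the
            // phoenix-spark subclass can evaluate the same logical query
            // through Spark and expose the rows as a SparkResultSet.
            ResultSet rs = executeQuery(conn, queryBuilder);
            while (rs.next()) {
                // consume rows in B_STRING, ENTITY_ID order
            }
        }
    }
}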


http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 2059311..8916d4d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -18,506 +18,28 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
-import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.List;
 import java.util.Properties;
 
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.compile.QueryPlan;
-import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
-import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.schema.AmbiguousColumnException;
-import org.apache.phoenix.schema.types.PChar;
-import org.apache.phoenix.schema.types.PInteger;
-import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
+public class AggregateIT extends BaseAggregateIT {
 
-public class AggregateIT extends ParallelStatsDisabledIT {
-private static void initData(Connection conn, String tableName) throws SQLException {
-conn.createStatement().execute("create table " + tableName +
-"   (id varchar not null primary key,\n" +
-"uri varchar, appcpu integer)");
-insertRow(conn, tableName, "Report1", 10, 1);
-insertRow(conn, tableName, "Report2", 10, 2);
-insertRow(conn, tableName, "Report3", 30, 3);
-insertRow(conn, tableName, "Report4", 30, 4);
-insertRow(conn, tableName, "SOQL1", 10, 5);
-insertRow(conn, tableName, "SOQL2", 10, 6);
-insertRow(conn, tableName, "SOQL3", 30, 7);
-insertRow(conn, tableName, "SOQL4", 30, 8);
-conn.commit();
-}
-
-private static void insertRow(Connection conn, String tableName, String uri, int appcpu, 

[2/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
index c9168f1..69c9869 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
@@ -37,104 +37,18 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 
-
 /**
 * Tests for tables with transparent salting.
  */
 
-public class SaltedTableIT extends ParallelStatsDisabledIT {
-
-   private static String getUniqueTableName() {
-   return SchemaUtil.getTableName(generateUniqueName(), generateUniqueName());
-   }
-   
-private static String initTableValues(byte[][] splits) throws Exception {
-   String tableName = getUniqueTableName();
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-
-// Rows we inserted:
-// 1, ab,  123, abc, 111
-// 1, abc, 456, abc, 111
-// 1, de,  123, abc, 111
-// 2, abc, 123, def, 222
-// 3, abc, 123, ghi, 333
-// 4, abc, 123, jkl, 444
-try {
-// Upsert with no columns specified.
-ensureTableCreated(getUrl(), tableName, TABLE_WITH_SALTING, splits, null, null);
-String query = "UPSERT INTO " + tableName + " VALUES(?,?,?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(query);
-stmt.setInt(1, 1);
-stmt.setString(2, "ab");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 1);
-stmt.setString(2, "abc");
-stmt.setString(3, "456");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-// Test upsert when statement explicitly specifies the columns to upsert into.
-query = "UPSERT INTO " + tableName +
-" (a_integer, a_string, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-
-stmt.setInt(1, 1);
-stmt.setString(2, "de");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 2);
-stmt.setString(2, "abc");
-stmt.setString(3, "123");
-stmt.setString(4, "def");
-stmt.setInt(5, 222);
-stmt.execute();
-conn.commit();
-
-// Test upsert when the order of columns is shuffled.
-query = "UPSERT INTO " + tableName +
-" (a_string, a_integer, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-stmt.setString(1, "abc");
-stmt.setInt(2, 3);
-stmt.setString(3, "123");
-stmt.setString(4, "ghi");
-stmt.setInt(5, 333);
-stmt.execute();
-conn.commit();
-
-stmt.setString(1, "abc");
-stmt.setInt(2, 4);
-stmt.setString(3, "123");
-stmt.setString(4, "jkl");
-stmt.setInt(5, 444);
-stmt.execute();
-conn.commit();
-} finally {
-conn.close();
-}
-return tableName;
-}
+public class SaltedTableIT extends BaseSaltedTableIT {
 
 @Test
 public void testTableWithInvalidBucketNumber() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
-String query = "create table " + getUniqueTableName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
+String query = "create table " + generateUniqueName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
 PreparedStatement stmt = conn.prepareStatement(query);
 stmt.execute();
 fail("Should have caught exception");
@@ -148,370 +62,12 @@ public class SaltedTableIT extends ParallelStatsDisabledIT {
 @Test
 public void testTableWithSplit() throws Exception {
 try {
-createTestTable(getUrl(), "create table " + 

[4/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
new file mode 100644
index 000..31bf050
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
@@ -0,0 +1,940 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.ROW1;
+import static org.apache.phoenix.util.TestUtil.ROW2;
+import static org.apache.phoenix.util.TestUtil.ROW3;
+import static org.apache.phoenix.util.TestUtil.ROW4;
+import static org.apache.phoenix.util.TestUtil.ROW5;
+import static org.apache.phoenix.util.TestUtil.ROW6;
+import static org.apache.phoenix.util.TestUtil.ROW7;
+import static org.apache.phoenix.util.TestUtil.ROW8;
+import static org.apache.phoenix.util.TestUtil.ROW9;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.junit.Test;
+
+
+public abstract class BaseOrderByIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING, ENTITY_ID");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW1,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW4,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW7,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW2,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW5,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW8,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+
+assertFalse(rs.next());
+}
+}
+
+
+@Test
+public void testDescMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING || ENTITY_ID DESC");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue 

[5/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
new file mode 100644
index 000..5b466df
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
@@ -0,0 +1,1022 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.AmbiguousColumnException;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+
+public abstract class BaseAggregateIT extends ParallelStatsDisabledIT {
+
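+// Seeds eight rows (four "ReportN" and four "SOQLN" URIs, each with an
+// appcpu of 10 or 30) for the aggregate tests below to group over.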
+private static void initData(Connection conn, String tableName) throws SQLException {
+conn.createStatement().execute("create table " + tableName +
+"   (id varchar not null primary key,\n" +
+"uri varchar, appcpu integer)");
+insertRow(conn, tableName, "Report1", 10, 1);
+insertRow(conn, tableName, "Report2", 10, 2);
+insertRow(conn, tableName, "Report3", 30, 3);
+insertRow(conn, tableName, "Report4", 30, 4);
+insertRow(conn, tableName, "SOQL1", 10, 5);
+insertRow(conn, tableName, "SOQL2", 10, 6);
+insertRow(conn, tableName, "SOQL3", 30, 7);
+insertRow(conn, tableName, "SOQL4", 30, 8);
+conn.commit();
+}
+
+private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
+statement.setString(1, "id" + id);
+statement.setString(2, uri);
+statement.setInt(3, appcpu);
+statement.executeUpdate();
+}
+
+@Test
+public void testDuplicateTrailingAggExpr() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.FALSE.toString());
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+
+conn.createStatement().execute("create table " + tableName +
+"   (nam VARCHAR(20), address VARCHAR(20), id BIGINT "
++ "constraint my_pk primary key (id))");
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + 
tableName + "(nam, address, id) values (?,?,?)");
+statement.setString(1, "pulkit");
+statement.setString(2, "badaun");
+statement.setInt(3, 1);
+statement.executeUpdate();
+conn.commit();
+
+QueryBuilder queryBuilder = new QueryBuilder()
+.setDistinct(true)
+.setSelectExpression("'harshit' as 

[3/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 578a3af..792d08f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -17,17 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.ROW1;
-import static org.apache.phoenix.util.TestUtil.ROW2;
-import static org.apache.phoenix.util.TestUtil.ROW3;
-import static org.apache.phoenix.util.TestUtil.ROW4;
-import static org.apache.phoenix.util.TestUtil.ROW5;
-import static org.apache.phoenix.util.TestUtil.ROW6;
-import static org.apache.phoenix.util.TestUtil.ROW7;
-import static org.apache.phoenix.util.TestUtil.ROW8;
-import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -40,83 +30,10 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
-import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
-
-public class OrderByIT extends ParallelStatsDisabledIT {
-
-@Test
-public void testMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
-
-
-@Test
-public void testDescMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string || entity_id desc";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
+public class OrderByIT extends BaseOrderByIT {
 
 @Test
 public void testOrderByWithPosition() throws Exception {
@@ -151,8 +68,8 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 assertTrue(rs.next());
 assertEquals(1,rs.getInt(1));
 assertTrue(rs.next());
-assertEquals(1,rs.getInt(1));  
-assertFalse(rs.next());  
+  

[6/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9bfaf183
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9bfaf183
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9bfaf183

Branch: refs/heads/4.x-HBase-1.3
Commit: 9bfaf183a3d092bce6b2dfcacd77ba46aa1b078b
Parents: 1b2a3d5
Author: Thomas D'Silva 
Authored: Thu Oct 18 22:00:01 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 14:51:13 2018 -0800

--
 .../org/apache/phoenix/end2end/AggregateIT.java |  987 +---
 .../apache/phoenix/end2end/BaseAggregateIT.java | 1022 +
 .../apache/phoenix/end2end/BaseOrderByIT.java   |  940 
 .../org/apache/phoenix/end2end/OrderByIT.java   |  943 ++--
 .../end2end/ParallelStatsDisabledIT.java|   40 +
 .../end2end/salted/BaseSaltedTableIT.java   |  474 
 .../phoenix/end2end/salted/SaltedTableIT.java   |  450 +---
 .../org/apache/phoenix/util/QueryBuilder.java   |  211 
 .../java/org/apache/phoenix/util/QueryUtil.java |   38 +-
 .../index/IndexScrutinyTableOutputTest.java |6 +-
 .../util/PhoenixConfigurationUtilTest.java  |6 +-
 .../org/apache/phoenix/util/QueryUtilTest.java  |   10 +-
 phoenix-spark/pom.xml   |8 +
 .../org/apache/phoenix/spark/AggregateIT.java   |   91 ++
 .../org/apache/phoenix/spark/OrderByIT.java |  460 
 .../org/apache/phoenix/spark/SaltedTableIT.java |   53 +
 .../org/apache/phoenix/spark/SparkUtil.java |   87 ++
 .../apache/phoenix/spark/PhoenixSparkIT.scala   |9 +-
 .../apache/phoenix/spark/SparkResultSet.java| 1056 ++
 .../org/apache/phoenix/spark/PhoenixRDD.scala   |   27 +-
 pom.xml |2 +-
 21 files changed, 4650 insertions(+), 2270 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 2059311..8916d4d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -18,506 +18,28 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
-import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.List;
 import java.util.Properties;
 
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.compile.QueryPlan;
-import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
-import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.schema.AmbiguousColumnException;
-import org.apache.phoenix.schema.types.PChar;
-import org.apache.phoenix.schema.types.PInteger;
-import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
+public class AggregateIT extends BaseAggregateIT {
 
-public class AggregateIT extends ParallelStatsDisabledIT {
-private static void initData(Connection conn, String tableName) throws SQLException {
-conn.createStatement().execute("create table " + tableName +
-"   (id varchar not null primary key,\n" +
-"uri varchar, appcpu integer)");
-insertRow(conn, tableName, "Report1", 10, 1);
-insertRow(conn, tableName, "Report2", 10, 2);
-insertRow(conn, tableName, "Report3", 30, 3);
-insertRow(conn, tableName, "Report4", 30, 4);
-insertRow(conn, tableName, "SOQL1", 10, 5);
-insertRow(conn, tableName, "SOQL2", 10, 6);
-insertRow(conn, tableName, "SOQL3", 30, 7);
-insertRow(conn, tableName, "SOQL4", 30, 8);
-conn.commit();
-}
-
-private static void insertRow(Connection conn, String tableName, String uri, int appcpu, 

[1/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 8ccf69f00 -> 51d38d7fc


http://git-wip-us.apache.org/repos/asf/phoenix/blob/51d38d7f/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
--
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
new file mode 100644
index 000..0cb8009
--- /dev/null
+++ b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
@@ -0,0 +1,1056 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.spark;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.util.SQLCloseable;
+import org.apache.spark.sql.Row;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.NClob;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper class to convert a List of Rows returned from a Dataset to a SQL ResultSet.
+ */
+public class SparkResultSet implements ResultSet, SQLCloseable {
+
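+// Cursor position; starts one slot before the first row, per the JDBC
+// ResultSet convention.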
+private int index = -1;
+private List<Row> dataSetRows;
+private List<String> columnNames;
+private boolean wasNull = false;
+
+public SparkResultSet(List<Row> rows, String[] columnNames) {
+this.dataSetRows = rows;
+this.columnNames = Arrays.asList(columnNames);
+}
+
+private Row getCurrentRow() {
+return dataSetRows.get(index);
+}
+
+@Override
+public boolean absolute(int row) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void afterLast() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void beforeFirst() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void cancelRowUpdates() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void clearWarnings() throws SQLException {
+}
+
+@Override
+public void close() throws SQLException {
+}
+
+@Override
+public void deleteRow() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
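+// Labels are matched by upper-casing against the stored column names, and
+// JDBC column indexes are 1-based; an unknown label therefore yields 0
+// rather than an SQLException.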
+@Override
+public int findColumn(String columnLabel) throws SQLException {
+return columnNames.indexOf(columnLabel.toUpperCase())+1;
+}
+
+@Override
+public boolean first() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkOpen() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkCursorState() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+
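The class continues well past this truncation point (1,056 lines per the diffstat above), but the constructor and findColumn are enough to sketch the intended use. A hypothetical example; the class name, the Dataset, and its columns are illustrative, not taken from the patch:

import java.sql.ResultSet;
import java.util.List;

import org.apache.phoenix.spark.SparkResultSet;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkResultSetExample {
    // Wrap rows collected from a Spark Dataset so JDBC-style test
    // assertions can iterate them like any other ResultSet.
    public static ResultSet toResultSet(Dataset<Row> df) {
        List<Row> rows = df.collectAsList();   // materialize the result rows
        String[] columnNames = df.columns();   // used for label -> index lookup
        return new SparkResultSet(rows, columnNames);
    }
}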

[5/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
new file mode 100644
index 000..5b466df
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
@@ -0,0 +1,1022 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.AmbiguousColumnException;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+
+public abstract class BaseAggregateIT extends ParallelStatsDisabledIT {
+
+private static void initData(Connection conn, String tableName) throws SQLException {
+conn.createStatement().execute("create table " + tableName +
+"   (id varchar not null primary key,\n" +
+"uri varchar, appcpu integer)");
+insertRow(conn, tableName, "Report1", 10, 1);
+insertRow(conn, tableName, "Report2", 10, 2);
+insertRow(conn, tableName, "Report3", 30, 3);
+insertRow(conn, tableName, "Report4", 30, 4);
+insertRow(conn, tableName, "SOQL1", 10, 5);
+insertRow(conn, tableName, "SOQL2", 10, 6);
+insertRow(conn, tableName, "SOQL3", 30, 7);
+insertRow(conn, tableName, "SOQL4", 30, 8);
+conn.commit();
+}
+
+private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
+statement.setString(1, "id" + id);
+statement.setString(2, uri);
+statement.setInt(3, appcpu);
+statement.executeUpdate();
+}
+
+@Test
+public void testDuplicateTrailingAggExpr() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.FALSE.toString());
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+
+conn.createStatement().execute("create table " + tableName +
+"   (nam VARCHAR(20), address VARCHAR(20), id BIGINT "
++ "constraint my_pk primary key (id))");
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + 
tableName + "(nam, address, id) values (?,?,?)");
+statement.setString(1, "pulkit");
+statement.setString(2, "badaun");
+statement.setInt(3, 1);
+statement.executeUpdate();
+conn.commit();
+
+QueryBuilder queryBuilder = new QueryBuilder()
+.setDistinct(true)
+.setSelectExpression("'harshit' as 

[1/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 0c3f43384 -> c509d58f1


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
--
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
new file mode 100644
index 000..0cb8009
--- /dev/null
+++ b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
@@ -0,0 +1,1056 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.spark;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.util.SQLCloseable;
+import org.apache.spark.sql.Row;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.NClob;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper class to convert a List of Rows returned from a Dataset to a SQL ResultSet.
+ */
+public class SparkResultSet implements ResultSet, SQLCloseable {
+
+private int index = -1;
+private List<Row> dataSetRows;
+private List<String> columnNames;
+private boolean wasNull = false;
+
+public SparkResultSet(List<Row> rows, String[] columnNames) {
+this.dataSetRows = rows;
+this.columnNames = Arrays.asList(columnNames);
+}
+
+private Row getCurrentRow() {
+return dataSetRows.get(index);
+}
+
+@Override
+public boolean absolute(int row) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void afterLast() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void beforeFirst() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void cancelRowUpdates() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void clearWarnings() throws SQLException {
+}
+
+@Override
+public void close() throws SQLException {
+}
+
+@Override
+public void deleteRow() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public int findColumn(String columnLabel) throws SQLException {
+return columnNames.indexOf(columnLabel.toUpperCase())+1;
+}
+
+@Override
+public boolean first() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkOpen() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkCursorState() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+

[1/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 1b2a3d5c7 -> 9bfaf183a


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
--
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
new file mode 100644
index 000..0cb8009
--- /dev/null
+++ b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
@@ -0,0 +1,1056 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.spark;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.util.SQLCloseable;
+import org.apache.spark.sql.Row;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.NClob;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper class to convert a List of Rows returned from a Dataset to a SQL ResultSet.
+ */
+public class SparkResultSet implements ResultSet, SQLCloseable {
+
+private int index = -1;
+private List<Row> dataSetRows;
+private List<String> columnNames;
+private boolean wasNull = false;
+
+public SparkResultSet(List<Row> rows, String[] columnNames) {
+this.dataSetRows = rows;
+this.columnNames = Arrays.asList(columnNames);
+}
+
+private Row getCurrentRow() {
+return dataSetRows.get(index);
+}
+
+@Override
+public boolean absolute(int row) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void afterLast() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void beforeFirst() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void cancelRowUpdates() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void clearWarnings() throws SQLException {
+}
+
+@Override
+public void close() throws SQLException {
+}
+
+@Override
+public void deleteRow() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public int findColumn(String columnLabel) throws SQLException {
+return columnNames.indexOf(columnLabel.toUpperCase())+1;
+}
+
+@Override
+public boolean first() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkOpen() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkCursorState() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+

[4/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
new file mode 100644
index 000..31bf050
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
@@ -0,0 +1,940 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.ROW1;
+import static org.apache.phoenix.util.TestUtil.ROW2;
+import static org.apache.phoenix.util.TestUtil.ROW3;
+import static org.apache.phoenix.util.TestUtil.ROW4;
+import static org.apache.phoenix.util.TestUtil.ROW5;
+import static org.apache.phoenix.util.TestUtil.ROW6;
+import static org.apache.phoenix.util.TestUtil.ROW7;
+import static org.apache.phoenix.util.TestUtil.ROW8;
+import static org.apache.phoenix.util.TestUtil.ROW9;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.junit.Test;
+
+
+public abstract class BaseOrderByIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING, ENTITY_ID");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW1,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW4,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW7,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW2,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW5,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW8,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+
+assertFalse(rs.next());
+}
+}
+
+
+@Test
+public void testDescMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING || ENTITY_ID DESC");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue 

[3/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/9bfaf183/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 578a3af..792d08f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -17,17 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.ROW1;
-import static org.apache.phoenix.util.TestUtil.ROW2;
-import static org.apache.phoenix.util.TestUtil.ROW3;
-import static org.apache.phoenix.util.TestUtil.ROW4;
-import static org.apache.phoenix.util.TestUtil.ROW5;
-import static org.apache.phoenix.util.TestUtil.ROW6;
-import static org.apache.phoenix.util.TestUtil.ROW7;
-import static org.apache.phoenix.util.TestUtil.ROW8;
-import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -40,83 +30,10 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
-import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
-
-public class OrderByIT extends ParallelStatsDisabledIT {
-
-@Test
-public void testMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
-
-
-@Test
-public void testDescMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string || entity_id desc";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
+public class OrderByIT extends BaseOrderByIT {
 
 @Test
 public void testOrderByWithPosition() throws Exception {
@@ -151,8 +68,8 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 assertTrue(rs.next());
 assertEquals(1,rs.getInt(1));
 assertTrue(rs.next());
-assertEquals(1,rs.getInt(1));  
-assertFalse(rs.next());  
+  

[2/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
index c9168f1..69c9869 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
@@ -37,104 +37,18 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 
-
 /**
  * Tests for table with transparent salting.
  */
 
-public class SaltedTableIT extends ParallelStatsDisabledIT {
-
-   private static String getUniqueTableName() {
-   return SchemaUtil.getTableName(generateUniqueName(), generateUniqueName());
-   }
-   
-private static String initTableValues(byte[][] splits) throws Exception {
-   String tableName = getUniqueTableName();
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-
-// Rows we inserted:
-// 1ab123abc111
-// 1abc456abc111
-// 1de123abc111
-// 2abc123def222 
-// 3abc123ghi333
-// 4abc123jkl444
-try {
-// Upsert with no column specifies.
-ensureTableCreated(getUrl(), tableName, TABLE_WITH_SALTING, splits, null, null);
-String query = "UPSERT INTO " + tableName + " VALUES(?,?,?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(query);
-stmt.setInt(1, 1);
-stmt.setString(2, "ab");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 1);
-stmt.setString(2, "abc");
-stmt.setString(3, "456");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-// Test upsert when statement explicitly specifies the columns to upsert into.
-query = "UPSERT INTO " + tableName +
-" (a_integer, a_string, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-
-stmt.setInt(1, 1);
-stmt.setString(2, "de");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 2);
-stmt.setString(2, "abc");
-stmt.setString(3, "123");
-stmt.setString(4, "def");
-stmt.setInt(5, 222);
-stmt.execute();
-conn.commit();
-
-// Test upsert when order of column is shuffled.
-query = "UPSERT INTO " + tableName +
-" (a_string, a_integer, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-stmt.setString(1, "abc");
-stmt.setInt(2, 3);
-stmt.setString(3, "123");
-stmt.setString(4, "ghi");
-stmt.setInt(5, 333);
-stmt.execute();
-conn.commit();
-
-stmt.setString(1, "abc");
-stmt.setInt(2, 4);
-stmt.setString(3, "123");
-stmt.setString(4, "jkl");
-stmt.setInt(5, 444);
-stmt.execute();
-conn.commit();
-} finally {
-conn.close();
-}
-return tableName;
-}
+public class SaltedTableIT extends BaseSaltedTableIT {
 
 @Test
 public void testTableWithInvalidBucketNumber() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
-String query = "create table " + getUniqueTableName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
+String query = "create table " + generateUniqueName() + " 
(a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS 
= 257";
 PreparedStatement stmt = conn.prepareStatement(query);
 stmt.execute();
 fail("Should have caught exception");
@@ -148,370 +62,12 @@ public class SaltedTableIT extends ParallelStatsDisabledIT {
 @Test
 public void testTableWithSplit() throws Exception {
 try {
-createTestTable(getUrl(), "create table " + 
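
The testTableWithInvalidBucketNumber case above hinges on Phoenix rejecting SALT_BUCKETS = 257: the salt prefix is a single byte, so the bucket count cannot exceed 256. A self-contained sketch of a DDL that should pass the same validation (URL and table name are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class SaltedTableSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                // 16 buckets is within the accepted range; 257 triggers the
                // exception the test above waits for.
                conn.createStatement().execute(
                        "CREATE TABLE SALTED_SKETCH (a_integer INTEGER NOT NULL"
                        + " CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS = 16");
            }
        }
    }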

[5/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
new file mode 100644
index 000..5b466df
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
@@ -0,0 +1,1022 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.AmbiguousColumnException;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+
+public abstract class BaseAggregateIT extends ParallelStatsDisabledIT {
+
+private static void initData(Connection conn, String tableName) throws SQLException {
+conn.createStatement().execute("create table " + tableName +
+"   (id varchar not null primary key,\n" +
+"uri varchar, appcpu integer)");
+insertRow(conn, tableName, "Report1", 10, 1);
+insertRow(conn, tableName, "Report2", 10, 2);
+insertRow(conn, tableName, "Report3", 30, 3);
+insertRow(conn, tableName, "Report4", 30, 4);
+insertRow(conn, tableName, "SOQL1", 10, 5);
+insertRow(conn, tableName, "SOQL2", 10, 6);
+insertRow(conn, tableName, "SOQL3", 30, 7);
+insertRow(conn, tableName, "SOQL4", 30, 8);
+conn.commit();
+}
+
+private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
+statement.setString(1, "id" + id);
+statement.setString(2, uri);
+statement.setInt(3, appcpu);
+statement.executeUpdate();
+}
+
+@Test
+public void testDuplicateTrailingAggExpr() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.FALSE.toString());
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+
+conn.createStatement().execute("create table " + tableName +
+"   (nam VARCHAR(20), address VARCHAR(20), id BIGINT "
++ "constraint my_pk primary key (id))");
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(nam, address, id) values (?,?,?)");
+statement.setString(1, "pulkit");
+statement.setString(2, "badaun");
+statement.setInt(3, 1);
+statement.executeUpdate();
+conn.commit();
+
+QueryBuilder queryBuilder = new QueryBuilder()
+.setDistinct(true)
+.setSelectExpression("'harshit' as 
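
The message is cut off mid-expression here, but the visible builder calls (setDistinct plus setSelectExpression) assemble a DISTINCT projection over arbitrary expressions. A plain-JDBC sketch of that query shape against the table created above; the alias and second column are illustrative only, since the original expression is truncated in this digest:

    // Runs in the surrounding test's context: conn and tableName come from the test.
    ResultSet rs = conn.createStatement().executeQuery(
            "SELECT DISTINCT 'harshit' AS test_column, nam FROM " + tableName);
    while (rs.next()) {
        System.out.println(rs.getString(1) + " / " + rs.getString(2));
    }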

[3/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 578a3af..792d08f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -17,17 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.ROW1;
-import static org.apache.phoenix.util.TestUtil.ROW2;
-import static org.apache.phoenix.util.TestUtil.ROW3;
-import static org.apache.phoenix.util.TestUtil.ROW4;
-import static org.apache.phoenix.util.TestUtil.ROW5;
-import static org.apache.phoenix.util.TestUtil.ROW6;
-import static org.apache.phoenix.util.TestUtil.ROW7;
-import static org.apache.phoenix.util.TestUtil.ROW8;
-import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -40,83 +30,10 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
-import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
-
-public class OrderByIT extends ParallelStatsDisabledIT {
-
-@Test
-public void testMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
-
-
-@Test
-public void testDescMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string || entity_id desc";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
+public class OrderByIT extends BaseOrderByIT {
 
 @Test
 public void testOrderByWithPosition() throws Exception {
@@ -151,8 +68,8 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 assertTrue(rs.next());
 assertEquals(1,rs.getInt(1));
 assertTrue(rs.next());
-assertEquals(1,rs.getInt(1));  
-assertFalse(rs.next());  
+  

[6/6] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-06 Thread tdsilva
PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c509d58f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c509d58f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c509d58f

Branch: refs/heads/4.x-HBase-1.2
Commit: c509d58f12b73ba9e24b53d2e9ca0271666a400d
Parents: 0c3f433
Author: Thomas D'Silva 
Authored: Thu Oct 18 22:00:01 2018 -0700
Committer: Thomas D'Silva 
Committed: Tue Nov 6 14:52:08 2018 -0800

--
 .../org/apache/phoenix/end2end/AggregateIT.java |  987 +---
 .../apache/phoenix/end2end/BaseAggregateIT.java | 1022 +
 .../apache/phoenix/end2end/BaseOrderByIT.java   |  940 
 .../org/apache/phoenix/end2end/OrderByIT.java   |  943 ++--
 .../end2end/ParallelStatsDisabledIT.java|   40 +
 .../end2end/salted/BaseSaltedTableIT.java   |  474 
 .../phoenix/end2end/salted/SaltedTableIT.java   |  450 +---
 .../org/apache/phoenix/util/QueryBuilder.java   |  211 
 .../java/org/apache/phoenix/util/QueryUtil.java |   38 +-
 .../index/IndexScrutinyTableOutputTest.java |6 +-
 .../util/PhoenixConfigurationUtilTest.java  |6 +-
 .../org/apache/phoenix/util/QueryUtilTest.java  |   10 +-
 phoenix-spark/pom.xml   |8 +
 .../org/apache/phoenix/spark/AggregateIT.java   |   91 ++
 .../org/apache/phoenix/spark/OrderByIT.java |  460 
 .../org/apache/phoenix/spark/SaltedTableIT.java |   53 +
 .../org/apache/phoenix/spark/SparkUtil.java |   87 ++
 .../apache/phoenix/spark/PhoenixSparkIT.scala   |9 +-
 .../apache/phoenix/spark/SparkResultSet.java| 1056 ++
 .../org/apache/phoenix/spark/PhoenixRDD.scala   |   27 +-
 pom.xml |2 +-
 21 files changed, 4650 insertions(+), 2270 deletions(-)
--
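
The file list makes the shape of the refactor clear: the shared assertions move into abstract Base*IT classes in phoenix-core, phoenix-core keeps thin JDBC subclasses (AggregateIT, OrderByIT, SaltedTableIT), and phoenix-spark adds its own subclasses plus a SparkResultSet adapter so the identical assertions can run against Spark. A sketch of that subclass idea; the override hook and the SparkUtil method named here are assumptions, not the patch's verbatim API:

    // Hypothetical phoenix-spark subclass: reuse the shared assertions but
    // reroute execution through Spark, adapting rows back to a java.sql.ResultSet
    // (cf. SparkResultSet in the file list above).
    public class SparkSideOrderByIT extends BaseOrderByIT {
        @Override
        protected ResultSet executeQuery(Connection conn, QueryBuilder queryBuilder)
                throws SQLException {
            return SparkUtil.executeQuery(conn, queryBuilder); // assumed helper
        }
    }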


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c509d58f/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 2059311..8916d4d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -18,506 +18,28 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
-import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.List;
 import java.util.Properties;
 
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.compile.QueryPlan;
-import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
-import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.schema.AmbiguousColumnException;
-import org.apache.phoenix.schema.types.PChar;
-import org.apache.phoenix.schema.types.PInteger;
-import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
+public class AggregateIT extends BaseAggregateIT {
 
-public class AggregateIT extends ParallelStatsDisabledIT {
-private static void initData(Connection conn, String tableName) throws SQLException {
-conn.createStatement().execute("create table " + tableName +
-"   (id varchar not null primary key,\n" +
-"uri varchar, appcpu integer)");
-insertRow(conn, tableName, "Report1", 10, 1);
-insertRow(conn, tableName, "Report2", 10, 2);
-insertRow(conn, tableName, "Report3", 30, 3);
-insertRow(conn, tableName, "Report4", 30, 4);
-insertRow(conn, tableName, "SOQL1", 10, 5);
-insertRow(conn, tableName, "SOQL2", 10, 6);
-insertRow(conn, tableName, "SOQL3", 30, 7);
-insertRow(conn, tableName, "SOQL4", 30, 8);
-conn.commit();
-}
-
-private static void insertRow(Connection conn, String tableName, String uri, int appcpu, 

phoenix git commit: PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper (addendum)

2018-11-06 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/omid2 e13de05a2 -> 0f2b1b8ba


PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0f2b1b8b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0f2b1b8b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0f2b1b8b

Branch: refs/heads/omid2
Commit: 0f2b1b8ba9995e668edac908ef0bf1b4d72167ec
Parents: e13de05
Author: James Taylor 
Authored: Tue Nov 6 12:39:45 2018 -0800
Committer: James Taylor 
Committed: Tue Nov 6 12:39:45 2018 -0800

--
 .../transaction/OmidTransactionProvider.java| 25 +++-
 .../phoenix/query/QueryServicesTestImpl.java|  8 ---
 .../java/org/apache/phoenix/util/TestUtil.java  | 14 +++
 3 files changed, 22 insertions(+), 25 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0f2b1b8b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
index 98b56ad..610a5d1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.transaction;
 
 import java.io.IOException;
-import java.net.ServerSocket;
 import java.sql.SQLException;
 import java.util.Arrays;
 
@@ -42,14 +41,11 @@ import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
 import org.apache.phoenix.transaction.TransactionFactory.Provider;
-import org.slf4j.Logger;
-import org.slf4j.LoggerFactory;
 
 import com.google.inject.Guice;
 import com.google.inject.Injector;
 
 public class OmidTransactionProvider implements PhoenixTransactionProvider {
-private static final Logger logger = LoggerFactory.getLogger(OmidTransactionProvider.class);
 private static final OmidTransactionProvider INSTANCE = new OmidTransactionProvider();
 public static final String OMID_TSO_PORT = "phoenix.omid.tso.port";
 public static final String OMID_TSO_CONFLICT_MAP_SIZE = "phoenix.omid.tso.conflict.map.size";
@@ -122,31 +118,16 @@ public class OmidTransactionProvider implements PhoenixTransactionProvider {
 return commitTableClient;
 }
 
-/**
- * Find a random free port in localhost for binding.
- * @return A port number or -1 for failure.
- */
-private static int getRandomPort() {
-  try (ServerSocket socket = new ServerSocket(0)) {
-return socket.getLocalPort();
-  } catch (IOException e) {
-return -1;
-  }
-}
-
 @Override
 public PhoenixTransactionService getTransactionService(Configuration config, ConnectionInfo connectionInfo) throws  SQLException{
 TSOServerConfig tsoConfig = new TSOServerConfig();
 TSOServer tso;
 
-int port;
 String portStr = config.get(OMID_TSO_PORT);
-if (portStr == null) { // For testing, we generate a random port.
-port = getRandomPort();
-logger.warn("Using random port for " + OMID_TSO_PORT + " of " + port);
-} else {
-port = Integer.parseInt(portStr);
+if (portStr == null) {
+throw new IllegalArgumentException(OMID_TSO_PORT + " config parameter must be bound");
 }
+int  port = Integer.parseInt(portStr);
 
 tsoConfig.setPort(port);
 tsoConfig.setConflictMapSize(config.getInt(OMID_TSO_CONFLICT_MAP_SIZE, DEFAULT_OMID_TSO_CONFLICT_MAP_SIZE));

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0f2b1b8b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
index ab45633..49fb8e8 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
@@ -22,10 +22,11 @@ import static org.apache.phoenix.query.QueryServicesOptions.withDefaults;
 
 import org.apache.curator.shaded.com.google.common.io.Files;
 import org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec;
+import org.apache.phoenix.transaction.OmidTransactionProvider;
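
With this addendum, getTransactionService no longer falls back to a random port: callers must set phoenix.omid.tso.port up front or the method throws IllegalArgumentException. A self-contained sketch of the required configuration (the port value is an arbitrary example):

    import org.apache.hadoop.conf.Configuration;

    public class OmidPortConfigSketch {
        public static void main(String[] args) {
            Configuration config = new Configuration();
            // Key matches OmidTransactionProvider.OMID_TSO_PORT in the diff above.
            config.set("phoenix.omid.tso.port", "54758");
            // getTransactionService(config, connectionInfo) now parses this value
            // instead of picking a random free port.
            int port = Integer.parseInt(config.get("phoenix.omid.tso.port"));
            System.out.println("TSO server will bind to port " + port);
        }
    }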
 

phoenix git commit: PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper

2018-11-06 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/omid2 5728e183f -> e13de05a2


PHOENIX-5004 Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e13de05a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e13de05a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e13de05a

Branch: refs/heads/omid2
Commit: e13de05a252a8eb56e1292225199cdefb81c
Parents: 5728e18
Author: James Taylor 
Authored: Tue Nov 6 12:21:32 2018 -0800
Committer: James Taylor 
Committed: Tue Nov 6 12:21:32 2018 -0800

--
 .../org/apache/phoenix/end2end/IndexToolIT.java |  8 ---
 .../transaction/OmidTransactionProvider.java| 25 +---
 .../phoenix/query/QueryServicesTestImpl.java|  4 +---
 3 files changed, 28 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e13de05a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index c99f145..e096bb5 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -58,7 +58,6 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
-import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
@@ -67,8 +66,7 @@ import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
-@Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends ParallelStatsEnabledIT {
+public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -99,9 +97,13 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 @BeforeClass
 public static void setup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(2);
+serverProps.put(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, Long.toString(20));
+serverProps.put(QueryServices.MAX_SERVER_METADATA_CACHE_TIME_TO_LIVE_MS_ATTRIB, Long.toString(5));
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
 Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(2);
+clientProps.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
+clientProps.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
 clientProps.put(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
 clientProps.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.TRUE.toString());
 setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()),

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e13de05a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
index 610a5d1..98b56ad 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.transaction;
 
 import java.io.IOException;
+import java.net.ServerSocket;
 import java.sql.SQLException;
 import java.util.Arrays;
 
@@ -41,11 +42,14 @@ import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
 import org.apache.phoenix.transaction.TransactionFactory.Provider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.inject.Guice;
 import com.google.inject.Injector;
 
 public class OmidTransactionProvider implements PhoenixTransactionProvider {
+private static final Logger logger = LoggerFactory.getLogger(OmidTransactionProvider.class);
 private static final OmidTransactionProvider INSTANCE = new OmidTransactionProvider();
 public static final String OMID_TSO_PORT = "phoenix.omid.tso.port";
 public static final String OMID_TSO_CONFLICT_MAP_SIZE = 
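
The port-discovery helper this commit introduces (and which the addendum above later removes in favor of an explicit config value) uses the standard bind-to-port-zero trick: open a ServerSocket on port 0, let the OS assign a free ephemeral port, and hand that port to the TSO server. A self-contained sketch of the same technique:

    import java.io.IOException;
    import java.net.ServerSocket;

    public class FreePortSketch {
        // Mirrors the getRandomPort helper shown in the addendum's removal hunk:
        // binding to port 0 asks the OS for any free port; -1 signals failure.
        static int getRandomPort() {
            try (ServerSocket socket = new ServerSocket(0)) {
                return socket.getLocalPort();
            } catch (IOException e) {
                return -1;
            }
        }

        public static void main(String[] args) {
            System.out.println("Free port: " + getRandomPort());
        }
    }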

Build failed in Jenkins: Phoenix-omid2 #142

2018-11-06 Thread Apache Jenkins Server
See 


Changes:

[jamestaylor] add omid coprocessor classes to phoenix-server. default tso world_time.

--
[...truncated 217.59 KB...]
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 s 
- in org.apache.phoenix.end2end.SequencePointInTimeIT
[INFO] Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 84.954 
s - in org.apache.phoenix.end2end.RegexBulkLoadToolIT
[INFO] Running org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Running org.apache.phoenix.end2end.SplitIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.775 s 
- in org.apache.phoenix.end2end.SpillableGroupByIT
[INFO] Running org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running 
org.apache.phoenix.end2end.SysTableNamespaceMappedStatsCollectorIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 154.699 
s - in org.apache.phoenix.end2end.StatsEnabledSplitSystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogCreationOnConnectionIT
[WARNING] Tests run: 42, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 
223.852 s - in 
org.apache.phoenix.end2end.SysTableNamespaceMappedStatsCollectorIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 334.299 
s - in org.apache.phoenix.end2end.SplitIT
[INFO] Running org.apache.phoenix.end2end.SystemTablePermissionsIT
[INFO] Running org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.846 s 
- in org.apache.phoenix.end2end.SystemCatalogIT
[INFO] Running org.apache.phoenix.end2end.TableDDLPermissionsIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 141.574 
s - in org.apache.phoenix.end2end.SystemTablePermissionsIT
[INFO] Running org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.467 s 
- in org.apache.phoenix.end2end.TableSnapshotReadsMapReduceIT
[INFO] Running org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.376 s 
- in org.apache.phoenix.end2end.UpdateCacheAcrossDifferentClientsIT
[INFO] Running org.apache.phoenix.end2end.UserDefinedFunctionsIT
[INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 172.474 
s - in org.apache.phoenix.end2end.UserDefinedFunctionsIT
[ERROR] Tests run: 120, Failures: 0, Errors: 16, Skipped: 0, Time elapsed: 
1,981.921 s <<< FAILURE! - in org.apache.phoenix.end2end.IndexToolIT
[ERROR] 
testSecondaryIndex[transactionProvider=TEPHRA,mutable=false,localIndex=false,directApi=false,useSnapshot=false](org.apache.phoenix.end2end.IndexToolIT)
  Time elapsed: 10.211 s  <<< ERROR!
java.lang.RuntimeException: org.apache.thrift.TException: Unable to discover 
transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
Caused by: org.apache.thrift.TException: Unable to discover transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)

[ERROR] 
testSecondaryIndex[transactionProvider=TEPHRA,mutable=false,localIndex=false,directApi=false,useSnapshot=true](org.apache.phoenix.end2end.IndexToolIT)
  Time elapsed: 10.005 s  <<< ERROR!
java.lang.RuntimeException: org.apache.thrift.TException: Unable to discover 
transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
Caused by: org.apache.thrift.TException: Unable to discover transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)

[ERROR] 
testSecondaryIndex[transactionProvider=TEPHRA,mutable=false,localIndex=false,directApi=true,useSnapshot=false](org.apache.phoenix.end2end.IndexToolIT)
  Time elapsed: 10.007 s  <<< ERROR!
java.lang.RuntimeException: org.apache.thrift.TException: Unable to discover 
transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
Caused by: org.apache.thrift.TException: Unable to discover transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)

[ERROR] 
testSecondaryIndex[transactionProvider=TEPHRA,mutable=false,localIndex=false,directApi=true,useSnapshot=true](org.apache.phoenix.end2end.IndexToolIT)
  Time elapsed: 10.005 s  <<< ERROR!
java.lang.RuntimeException: org.apache.thrift.TException: Unable to discover 
transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
Caused by: org.apache.thrift.TException: Unable to discover transaction service.
at 
org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)

[ERROR] 

phoenix git commit: add omid coprocessor classes to phoenix-server. default tso world_time. fix omid run script parameters

2018-11-06 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/omid2 0fcca0a76 -> 5728e183f


add omid coprocessor classes to phoenix-server. default tso world_time. fix omid run script parameters


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5728e183
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5728e183
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5728e183

Branch: refs/heads/omid2
Commit: 5728e183fdb4cbec5aed6c01bf958611733bef4d
Parents: 0fcca0a
Author: Yonatan Gottesman 
Authored: Sun Nov 4 09:03:31 2018 +0200
Committer: James Taylor 
Committed: Tue Nov 6 07:16:59 2018 -0800

--
 bin/omid-env.sh   | 22 +++---
 bin/omid-server-configuration.yml |  3 +++
 phoenix-server/pom.xml|  1 +
 3 files changed, 23 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5728e183/bin/omid-env.sh
--
diff --git a/bin/omid-env.sh b/bin/omid-env.sh
index 578382c..820cdaa 100644
--- a/bin/omid-env.sh
+++ b/bin/omid-env.sh
@@ -21,7 +21,23 @@
 # 
-
 # Check if HADOOP_CONF_DIR and HBASE_CONF_DIR are set
 # 
-
+export JVM_FLAGS=-Xmx4096m
+if [ -z ${HADOOP_CONF_DIR+x} ]; then
+if [ -z ${HADOOP_HOME+x} ]; then
+echo "WARNING: HADOOP_HOME or HADOOP_CONF_DIR are unset";
+else
+export HADOOP_CONF_DIR=${HADOOP_HOME}/conf
+fi
+else
+echo "HADOOP_CONF_DIR is set to '$HADOOP_CONF_DIR'";
+fi
 
-if [ -z ${HADOOP_CONF_DIR+x} ]; then echo "WARNING: HADOOP_CONF_DIR is unset"; else echo "HADOOP_CONF_DIR is set to '$HADOOP_CONF_DIR'"; fi
-if [ -z ${HBASE_CONF_DIR+x} ]; then echo "WARNING: HBASE_CONF_DIR is unset"; else echo "HBASE_CONF_DIR is set to '$HBASE_CONF_DIR'"; fi
-
+if [ -z ${HBASE_CONF_DIR+x} ]; then
+if [ -z ${HBASE_HOME+x} ]; then
+echo "WARNING: HBASE_HOME or HBASE_CONF_DIR are unset";
+else
+export HBASE_CONF_DIR=${HBASE_HOME}/conf
+fi
+else
+echo "HBASE_CONF_DIR is set to '$HBASE_CONF_DIR'";
+fi

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5728e183/bin/omid-server-configuration.yml
--
diff --git a/bin/omid-server-configuration.yml b/bin/omid-server-configuration.yml
index ab80667..8d1616e 100644
--- a/bin/omid-server-configuration.yml
+++ b/bin/omid-server-configuration.yml
@@ -20,3 +20,6 @@ metrics: !!org.apache.omid.metrics.CodahaleMetricsProvider [
   csvDir: "csvMetrics",
 }
 ]
+
+timestampType: WORLD_TIME
+lowLatency: false

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5728e183/phoenix-server/pom.xml
--
diff --git a/phoenix-server/pom.xml b/phoenix-server/pom.xml
index f5ba7f7..daf9fe5 100644
--- a/phoenix-server/pom.xml
+++ b/phoenix-server/pom.xml
@@ -125,6 +125,7 @@
   org.iq80.snappy:snappy
   org.antlr:antlr*
   org.apache.tephra:tephra*
+  org.apache.omid:omid*
   com.google.code.gson:gson
   org.jruby.joni:joni
   org.jruby.jcodings:jcodings



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #809

2018-11-06 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins1090866219399251589.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:17911380 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  154M  9.3G   2% /run
/dev/sda3   3.6T  145G  3.3T   5% /
tmpfs48G  472K   48G   1% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M   55M  394M  13% /boot
/dev/loop0   88M   88M 0 100% /snap/core/5662
/dev/loop1   28M   28M 0 100% /snap/snapcraft/1871
/dev/loop5   68M   68M 0 100% /snap/lxd/9334
/dev/loop2   52M   52M 0 100% /snap/lxd/9354
/dev/loop3   88M   88M 0 100% /snap/core/5742
tmpfs   9.5G 0  9.5G   0% /run/user/910
/dev/loop6   52M   52M 0 100% /snap/lxd/9412
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact 
org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central 
(https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version 
-> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure