Build failed in Jenkins: Phoenix-Calcite #106

2017-04-21 Thread Apache Jenkins Server
See 


Changes:

[rajeshbabu] PHOENIX-3715 Skip Error code assertions in test cases(Rajeshbabu)

--
[...truncated 3.04 MB...]

testGlobalPhoenixMetricsForMutations(org.apache.phoenix.monitoring.PhoenixMetricsIT)  Time elapsed: 2.332 sec  <<< FAILURE!
java.lang.AssertionError: expected:<10> but was:<0>
	at org.apache.phoenix.monitoring.PhoenixMetricsIT.testGlobalPhoenixMetricsForMutations(PhoenixMetricsIT.java:140)

testMetricsForDeleteWithAutoCommit(org.apache.phoenix.monitoring.PhoenixMetricsIT)  Time elapsed: 2.623 sec  <<< FAILURE!
java.lang.AssertionError: The two metrics have different or unequal number of table names
	at org.apache.phoenix.monitoring.PhoenixMetricsIT.assertMetricsAreSame(PhoenixMetricsIT.java:649)
	at org.apache.phoenix.monitoring.PhoenixMetricsIT.testMetricsForDeleteWithAutoCommit(PhoenixMetricsIT.java:439)

testMetricsForUpsertSelect(org.apache.phoenix.monitoring.PhoenixMetricsIT)  Time elapsed: 4.763 sec  <<< FAILURE!
java.lang.AssertionError: Mutation batch sizes didn't match! expected:<10> but was:<20>
	at org.apache.phoenix.monitoring.PhoenixMetricsIT.assertMutationMetrics(PhoenixMetricsIT.java:816)
	at org.apache.phoenix.monitoring.PhoenixMetricsIT.testMetricsForUpsertSelect(PhoenixMetricsIT.java:303)

Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 168.291 sec - in org.apache.phoenix.iterate.ScannerLeaseRenewalIT

Results :

Failed tests: 
  FlappingLocalIndexIT.testLocalIndexScanWithSmallChunks:104->BaseTest.assertEquals:1786 expected:<[z]> but was:<[a]>
  FlappingLocalIndexIT.testLocalIndexScanWithSmallChunks:104->BaseTest.assertEquals:1786 expected:<[z]> but was:<[a]>
  FlappingLocalIndexIT.testLocalIndexScan:164->BaseTest.assertEquals:1830 expected:<3> but was:<0>
  FlappingLocalIndexIT.testLocalIndexScan:164->BaseTest.assertEquals:1830 expected:<3> but was:<0>
  IndexExtendedIT.testLocalIndexScanAfterRegionsMerge:557->BaseTest.assertEquals:1786 expected:<[z]> but was:<[a]>
  QueryTimeoutIT.testQueryTimeout:136 Total time of query was 3574 ms, but expected to be greater than 1000
  QueryTimeoutIT.testSetRPCTimeOnConnection:102
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStatsWithMultipleTables:301->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testUpdateStats:234->upsertValues:341->upsertStmt:406
  StatsCollectorIT.testWithMultiCF:559->BaseTest.assertEquals:1830 expected:<12> but was:<13>
  StatsCollectorIT.testWithMultiCF:559->BaseTest.assertEquals:1830 expected:<12> but was:<13>
  StatsCollectorIT.testWithMultiCF:559->BaseTest.assertEquals:1830 expected:<12> but was:<13>
  StatsCollectorIT.testWithMultiCF:559->BaseTest.assertEquals:1830 expected:<12> but was:<13>
  SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testUpdateStatsWithMultipleTables:301->StatsCollectorIT.upsertValues:341->StatsCollectorIT.upsertStmt:406
  SysTableNamespaceMappedStatsCollectorIT>StatsCollectorIT.testUpdateStatsWithMultipleTables:301->StatsCollectorIT.upsertValues:341->StatsCollectorIT.upsertStmt:406
  

[2/2] phoenix git commit: PHOENIX-3715 Skip Error code assertions in test cases(Rajeshbabu)

2017-04-21 Thread rajeshbabu
PHOENIX-3715 Skip Error code assertions in test cases(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3a16fa99
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3a16fa99
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3a16fa99

Branch: refs/heads/calcite
Commit: 3a16fa99bb4edb526bfec3d87c3dd219531f09cb
Parents: 32e806c
Author: Rajeshbabu Chintaguntla 
Authored: Sat Apr 22 00:01:39 2017 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Sat Apr 22 00:01:39 2017 +0530

--
 .../AlterMultiTenantTableWithViewsIT.java   |   8 +-
 .../apache/phoenix/end2end/AlterTableIT.java|  56 
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  23 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |   3 +-
 .../phoenix/end2end/ArithmeticQueryIT.java  |  13 +-
 .../phoenix/end2end/AutoPartitionViewsIT.java   |   5 +-
 .../phoenix/end2end/CoalesceFunctionIT.java |   3 +-
 .../end2end/ColumnEncodedBytesPropIT.java   |   5 +-
 .../end2end/ConvertTimezoneFunctionIT.java  |   5 +-
 .../apache/phoenix/end2end/CreateSchemaIT.java  |   5 +-
 .../apache/phoenix/end2end/CreateTableIT.java   |   6 +-
 .../phoenix/end2end/DecodeFunctionIT.java   |   7 +-
 .../phoenix/end2end/DefaultColumnValueIT.java   |   5 +-
 .../phoenix/end2end/DisableLocalIndexIT.java|   4 +-
 .../apache/phoenix/end2end/DropSchemaIT.java|   4 +-
 .../apache/phoenix/end2end/DynamicUpsertIT.java |   3 +-
 .../phoenix/end2end/EncodeFunctionIT.java   |   2 +-
 .../phoenix/end2end/ExecuteStatementsIT.java|   5 +-
 .../org/apache/phoenix/end2end/HashJoinIT.java  |   5 +-
 .../end2end/ImmutableTablePropertiesIT.java |  11 +-
 .../end2end/QueryDatabaseMetaDataIT.java|   2 +-
 .../org/apache/phoenix/end2end/QueryIT.java |   3 +-
 .../apache/phoenix/end2end/QueryTimeoutIT.java  |   3 +-
 .../end2end/SequenceBulkAllocationIT.java   |  17 +--
 .../org/apache/phoenix/end2end/SequenceIT.java  |  39 +++---
 .../apache/phoenix/end2end/SortMergeJoinIT.java |   5 +-
 .../apache/phoenix/end2end/TenantIdTypeIT.java  |   9 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |  15 ++-
 .../end2end/TimezoneOffsetFunctionIT.java   |   3 +-
 .../org/apache/phoenix/end2end/UnionAllIT.java  |   3 +-
 .../apache/phoenix/end2end/UpsertSelectIT.java  |   4 +-
 .../apache/phoenix/end2end/UpsertValuesIT.java  |   4 +-
 .../org/apache/phoenix/end2end/UseSchemaIT.java |   2 +-
 .../phoenix/end2end/VariableLengthPKIT.java |   7 +-
 .../java/org/apache/phoenix/end2end/ViewIT.java |  21 +--
 .../phoenix/end2end/index/ImmutableIndexIT.java |   2 +-
 .../end2end/index/IndexExpressionIT.java|   2 +-
 .../apache/phoenix/end2end/index/IndexIT.java   |   2 +-
 .../phoenix/end2end/index/IndexMetadataIT.java  |  16 +--
 .../end2end/index/ReadOnlyIndexFailureIT.java   |   2 +-
 .../phoenix/monitoring/PhoenixMetricsIT.java|   3 +-
 .../phoenix/tx/FlappingTransactionIT.java   |   2 +-
 .../phoenix/tx/ParameterizedTransactionIT.java  |   6 +-
 .../org/apache/phoenix/tx/TransactionIT.java|  10 +-
 .../phoenix/compile/QueryCompilerTest.java  | 128 +--
 .../phoenix/compile/ViewCompilerTest.java   |   3 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   3 +-
 .../apache/phoenix/jdbc/PhoenixDriverTest.java  |   4 +-
 .../jdbc/PhoenixPreparedStatementTest.java  |   5 +-
 .../apache/phoenix/parse/QueryParserTest.java   |  23 ++--
 .../java/org/apache/phoenix/query/BaseTest.java |   1 +
 .../org/apache/phoenix/schema/MutationTest.java |  11 +-
 .../apache/phoenix/schema/SchemaUtilTest.java   |   3 +-
 .../phoenix/schema/types/PDataTypeTest.java |   2 +-
 .../org/apache/phoenix/util/ColumnInfoTest.java |   2 +-
 .../java/org/apache/phoenix/util/TestUtil.java  |  11 ++
 56 files changed, 302 insertions(+), 254 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3a16fa99/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterMultiTenantTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterMultiTenantTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterMultiTenantTableWithViewsIT.java
index 89df159..8bf9f35 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterMultiTenantTableWithViewsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterMultiTenantTableWithViewsIT.java
@@ -452,7 +452,7 @@ public class AlterMultiTenantTableWithViewsIT extends ParallelStatsDisabledIT {
             try {
                 tenant2Conn.createStatement().execute("SELECT KV FROM " + divergedView);
             } catch 

[1/2] phoenix git commit: PHOENIX-3715 Skip Error code assertions in test cases(Rajeshbabu)

2017-04-21 Thread rajeshbabu
Repository: phoenix
Updated Branches:
  refs/heads/calcite 32e806ce3 -> 3a16fa99b


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3a16fa99/phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java
index 65d47be..c6b4302 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java
@@ -31,6 +31,7 @@ import java.util.Calendar;
 import java.util.TimeZone;
 
 import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 
@@ -82,7 +83,7 @@ public class TimezoneOffsetFunctionIT extends ParallelStatsDisabledIT {
             assertEquals(0, rs.getInt(3));
             fail();
         } catch (SQLException e) {
-            assertEquals(SQLExceptionCode.ILLEGAL_DATA.getErrorCode(), e.getErrorCode());
+            TestUtil.assertErrorCodeEquals(SQLExceptionCode.ILLEGAL_DATA.getErrorCode(), e.getErrorCode());
         }
     }
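
Note on the change above: PHOENIX-3715 swaps direct assertEquals calls on SQL error codes for a new TestUtil.assertErrorCodeEquals helper (TestUtil.java gains 11 lines in the diffstat, but its body is not quoted in this digest). A minimal sketch of what such a helper could look like, on the assumption that the point is to make error-code assertions skippable on the calcite branch, where different error codes surface; the system property name here is hypothetical:

    // Hypothetical sketch only; the actual TestUtil.java change is not shown above.
    public static void assertErrorCodeEquals(int expected, int actual) {
        // Assumption: a flag lets branches with divergent error codes skip the check.
        if (Boolean.getBoolean("phoenix.test.skip.errorcode.assertions")) {
            return; // skip the error-code assertion entirely
        }
        org.junit.Assert.assertEquals(expected, actual);
    }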
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3a16fa99/phoenix-core/src/it/java/org/apache/phoenix/end2end/UnionAllIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UnionAllIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UnionAllIT.java
index eaeb6e3..df79480 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UnionAllIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UnionAllIT.java
@@ -35,6 +35,7 @@ import java.util.Properties;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
 public class UnionAllIT extends ParallelStatsDisabledIT {
@@ -297,7 +298,7 @@ public class UnionAllIT extends ParallelStatsDisabledIT {
             conn.createStatement().executeQuery(ddl);
             fail();
         }  catch (SQLException e) {
-            assertEquals(SQLExceptionCode.SELECT_COLUMN_NUM_IN_UNIONALL_DIFFS.getErrorCode(), e.getErrorCode());
+            TestUtil.assertErrorCodeEquals(SQLExceptionCode.SELECT_COLUMN_NUM_IN_UNIONALL_DIFFS.getErrorCode(), e.getErrorCode());
 } finally {
 conn.close();
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3a16fa99/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
index 745af34..e538575 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
@@ -1316,7 +1316,7 @@ public class UpsertSelectIT extends BaseClientManagedTimeIT {
             stmt.executeUpdate();
             fail();
         } catch (SQLException e) {
-            assertEquals(SQLExceptionCode.ILLEGAL_DATA.getErrorCode(), e.getErrorCode());
+            TestUtil.assertErrorCodeEquals(SQLExceptionCode.ILLEGAL_DATA.getErrorCode(), e.getErrorCode());
         }
     }
 
@@ -1426,7 +1426,7 @@ public class UpsertSelectIT extends BaseClientManagedTimeIT {
             conn.commit();
             fail();
         } catch (SQLException e) {
-            assertEquals(SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY.getErrorCode(), e.getErrorCode());
+            TestUtil.assertErrorCodeEquals(SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY.getErrorCode(), e.getErrorCode());
 }
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3a16fa99/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
index a055482..7a87b66 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertValuesIT.java
@@ -312,7 +312,7 @@ public class UpsertValuesIT extends BaseClientManagedTimeIT {
             stmt.execute("upsert into UpsertWithDesc values (to_char(100), to_char(100), to_char(100))");
             fail();
         } catch (SQLException e) {
-            assertEquals(SQLExceptionCode.UPSERT_COLUMN_NUMBERS_MISMATCH.getErrorCode(),e.getErrorCode());
+

Apache-Phoenix | 4.x-HBase-0.98 | Build Successful

2017-04-21 Thread Apache Jenkins Server
4.x-HBase-0.98 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-0.98

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail,

[ankitsinghal59] PHOENIX-3792 Provide way to skip normalization of column names in

[ankitsinghal59] PHOENIX-3759 Dropping a local index causes NPE



Build times for last couple of runs. Latest build time is the right-most | Legend: blue: normal, red: test failure, gray: timeout


Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-master/1602/

2017-04-21 Thread Apache Jenkins Server
[...truncated 21 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-master/1602/


Affected test class(es):
Set(['org.apache.phoenix.tx.TxCheckpointIT', 
'org.apache.phoenix.end2end.index.MutableIndexIT', 
'org.apache.phoenix.end2end.index.IndexIT', 
'org.apache.phoenix.end2end.SortMergeJoinIT'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

Build failed in Jenkins: Phoenix | Master #1602

2017-04-21 Thread Apache Jenkins Server
See 


Changes:

[ankitsinghal59] PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail,

[ankitsinghal59] PHOENIX-3792 Provide way to skip normalization of column names in

[ankitsinghal59] PHOENIX-3759 Dropping a local index causes NPE

--
[...truncated 74.73 KB...]
Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 201.579 sec - in org.apache.phoenix.tx.ParameterizedTransactionIT

Results :

Tests run: 1514, Failures: 0, Errors: 0, Skipped: 4

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.ArrayIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.343 sec - in org.apache.phoenix.end2end.CreateSchemaIT
Running org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Running org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.AggregateQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.283 sec - in org.apache.phoenix.end2end.CustomEntityDataIT
Running org.apache.phoenix.end2end.DerivedTableIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.536 sec - in org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.DistinctCountIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 56.862 sec - in org.apache.phoenix.end2end.DerivedTableIT
Running org.apache.phoenix.end2end.DropSchemaIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.08 sec - in org.apache.phoenix.end2end.DistinctCountIT
Running org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.341 sec - in org.apache.phoenix.end2end.DropSchemaIT
Running org.apache.phoenix.end2end.FunkyNamesIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.349 sec - in org.apache.phoenix.end2end.ExtendedQueryExecIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.555 sec - in org.apache.phoenix.end2end.FunkyNamesIT
Running org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.MutableQueryIT
Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 156.669 sec - in org.apache.phoenix.end2end.CaseStatementIT
Tests run: 80, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 165.212 sec - in org.apache.phoenix.end2end.ArrayIT
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 160.99 sec - in org.apache.phoenix.end2end.CastAndCoerceIT
Running org.apache.phoenix.end2end.NotQueryIT
Running org.apache.phoenix.end2end.PointInTimeQueryIT
Running org.apache.phoenix.end2end.NativeHBaseTypesIT
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 171.406 sec - in org.apache.phoenix.end2end.AggregateQueryIT
Running org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.246 sec - in org.apache.phoenix.end2end.NativeHBaseTypesIT
Running org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 206.576 sec - in org.apache.phoenix.end2end.CreateTableIT
Running org.apache.phoenix.end2end.QueryIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.979 sec - in org.apache.phoenix.end2end.PointInTimeQueryIT
Running org.apache.phoenix.end2end.ReadIsolationLevelIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.804 sec - in org.apache.phoenix.end2end.ReadIsolationLevelIT
Running org.apache.phoenix.end2end.RowValueConstructorIT
Tests run: 61, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.526 sec - in org.apache.phoenix.end2end.ProductMetricsIT
Tests run: 91, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 174.141 sec - in org.apache.phoenix.end2end.GroupByIT
Running org.apache.phoenix.end2end.ScanQueryIT
Running org.apache.phoenix.end2end.SequenceBulkAllocationIT
Tests run: 77, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 143.989 sec - in org.apache.phoenix.end2end.NotQueryIT
Tests run: 245, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 322.32 sec - in org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.466 sec - in org.apache.phoenix.end2end.QueryDatabaseMetaDataIT
Running org.apache.phoenix.end2end.ToNumberFunctionIT
Running org.apache.phoenix.end2end.SequenceIT
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time 

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #381

2017-04-21 Thread Apache Jenkins Server
See 


Changes:

[ankitsinghal59] PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail,

[ankitsinghal59] PHOENIX-3792 Provide way to skip normalization of column names in

[ankitsinghal59] PHOENIX-3759 Dropping a local index causes NPE

--
[...truncated 41.82 KB...]
Running org.apache.phoenix.expression.SortOrderExpressionTest
Running org.apache.phoenix.expression.ArrayConstructorExpressionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 sec - in org.apache.phoenix.expression.ArrayConstructorExpressionTest
Running org.apache.phoenix.expression.PowerFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec - in org.apache.phoenix.expression.PowerFunctionTest
Running org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.008 sec - in org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Running org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.102 sec - in org.apache.phoenix.expression.SortOrderExpressionTest
Running org.apache.phoenix.expression.function.InstrFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in org.apache.phoenix.expression.function.InstrFunctionTest
Running org.apache.phoenix.expression.ArrayFillFunctionTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.082 sec - in org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Running org.apache.phoenix.expression.RegexpReplaceFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec - in org.apache.phoenix.expression.RegexpReplaceFunctionTest
Running org.apache.phoenix.expression.SqrtFunctionTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.045 sec - in org.apache.phoenix.expression.ArrayFillFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.phoenix.expression.SqrtFunctionTest
Running org.apache.phoenix.expression.CbrtFunctionTest
Running org.apache.phoenix.expression.LnLogFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec - in org.apache.phoenix.expression.CbrtFunctionTest
Running org.apache.phoenix.expression.ColumnExpressionTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in org.apache.phoenix.expression.ColumnExpressionTest
Running org.apache.phoenix.expression.AbsFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.055 sec - in org.apache.phoenix.expression.LnLogFunctionTest
Running org.apache.phoenix.expression.StringToArrayFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in org.apache.phoenix.expression.AbsFunctionTest
Running org.apache.phoenix.query.ConnectionQueryServicesImplTest
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.067 sec - in org.apache.phoenix.expression.StringToArrayFunctionTest
Running org.apache.phoenix.query.OrderByTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.676 sec - in org.apache.phoenix.expression.NullValueTest
Running org.apache.phoenix.query.ConnectionlessTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.347 sec - in org.apache.phoenix.query.ConnectionQueryServicesImplTest
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.996 sec - in org.apache.phoenix.expression.ArrayConcatFunctionTest
Running org.apache.phoenix.query.KeyRangeUnionTest
Running org.apache.phoenix.query.KeyRangeIntersectTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.367 sec - in org.apache.phoenix.query.OrderByTest
Running org.apache.phoenix.query.HBaseFactoryProviderTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec - in org.apache.phoenix.query.HBaseFactoryProviderTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec - in org.apache.phoenix.query.KeyRangeUnionTest
Running org.apache.phoenix.query.ScannerLeaseRenewalTest
Running org.apache.phoenix.query.EncodedColumnQualifierCellsListTest
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec - in org.apache.phoenix.query.EncodedColumnQualifierCellsListTest
Running org.apache.phoenix.query.KeyRangeMoreTest
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.126 sec - in org.apache.phoenix.query.KeyRangeIntersectTest
Running org.apache.phoenix.query.ParallelIteratorsSplitTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.086 sec - in org.apache.phoenix.query.ConnectionlessTest
Running org.apache.phoenix.query.QueryPlanTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time 

[1/3] phoenix git commit: PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-21 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 452867b2c -> 301e961ff


PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9e7a9970
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9e7a9970
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9e7a9970

Branch: refs/heads/4.x-HBase-0.98
Commit: 9e7a9970273e6cdb8751f400afa23c510605b147
Parents: 452867b
Author: Ankit Singhal 
Authored: Fri Apr 21 11:54:56 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:54:56 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql   | 2 +-
 .../src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9e7a9970/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index 28eb0f7..dc24da7 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -60,4 +60,4 @@ UPSERT INTO "small" VALUES ('key3', 'xyz', 3)
 CREATE TABLE MULTITENANT_TEST_TABLE (TENANT_ID VARCHAR NOT NULL, ORGANIZATION_ID VARCHAR, GLOBAL_COL1 VARCHAR  CONSTRAINT pk PRIMARY KEY (TENANT_ID, ORGANIZATION_ID)) MULTI_TENANT=true
 CREATE TABLE IF NOT EXISTS GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
 CREATE TABLE IF NOT EXISTS OUTPUT_GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
- upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')
+ upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,null,null,CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9e7a9970/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
index 63547d2..2c2c6e1 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
@@ -134,9 +134,9 @@ class PhoenixRDD(sc: SparkContext, table: String, columns: Seq[String],
       val rowSeq = columns.map { case (name, sqlType) =>
         val res = pr.resultMap(name)
           // Special handling for data types
-          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19)) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
+          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19) && res!=null) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
-          } else if (sqlType == 92 || sqlType == 18) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
+          } else if ((sqlType == 92 || sqlType == 18) && res!=null) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
           } else {
             res
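
The fix is simply a null guard: the Date/Time-to-Timestamp conversion now runs only when the column value is non-null, so null DATE/TIME columns flow through instead of throwing the NullPointerException from the bug title. The same logic as a standalone Java sketch (the method name is ours, not a Phoenix API; sqlType 91/19 are DATE/UNSIGNED_DATE and 92/18 are TIME/UNSIGNED_TIME, per the comments in the patch):

    // Java restatement of the guarded conversion in PhoenixRDD above.
    static Object toSparkValue(Object res, int sqlType, boolean dateAsTimestamp) {
        if (dateAsTimestamp && (sqlType == 91 || sqlType == 19) && res != null) {
            return new java.sql.Timestamp(((java.sql.Date) res).getTime());
        } else if ((sqlType == 92 || sqlType == 18) && res != null) {
            return new java.sql.Timestamp(((java.sql.Time) res).getTime());
        }
        return res; // null values (and all other types) pass through unchanged
    }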



[3/3] phoenix git commit: PHOENIX-3759 Dropping a local index causes NPE

2017-04-21 Thread ankit
PHOENIX-3759 Dropping a local index causes NPE


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/301e961f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/301e961f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/301e961f

Branch: refs/heads/4.x-HBase-0.98
Commit: 301e961ffeb92a6ef382784b5657be51b9063b5c
Parents: fa5281e
Author: Ankit Singhal 
Authored: Fri Apr 21 11:57:58 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:57:58 2017 +0530

--
 .../org/apache/phoenix/end2end/index/LocalIndexIT.java | 13 +++--
 .../main/java/org/apache/phoenix/util/RepairUtil.java  | 11 +++
 2 files changed, 18 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/301e961f/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 8d3316b..1534cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -599,7 +599,6 @@ public class LocalIndexIT extends BaseLocalIndexIT {
         admin.disableTable(tableName);
         copyLocalIndexHFiles(config, tableRegions.get(0), tableRegions.get(1), false);
         copyLocalIndexHFiles(config, tableRegions.get(3), tableRegions.get(0), false);
-
         admin.enableTable(tableName);
 
         int count=getCount(conn, tableName, "L#0");
@@ -607,13 +606,23 @@ public class LocalIndexIT extends BaseLocalIndexIT {
         admin.majorCompact(tableName);
         int tryCount = 5;// need to wait for rebuilding of corrupted local index region
         while (tryCount-- > 0 && count != 14) {
-            Thread.sleep(3);
+            Thread.sleep(15000);
             count = getCount(conn, tableName, "L#0");
         }
         assertEquals(14, count);
         rs = statement.executeQuery("SELECT COUNT(*) FROM " + indexName1);
         assertTrue(rs.next());
         assertEquals(7, rs.getLong(1));
+        statement.execute("DROP INDEX " + indexName1 + " ON " + tableName);
+        admin.majorCompact(tableName);
+        statement.execute("DROP INDEX " + indexName + " ON " + tableName);
+        admin.majorCompact(tableName);
+        Thread.sleep(15000);
+        admin.majorCompact(tableName);
+        Thread.sleep(15000);
+        rs = statement.executeQuery("SELECT COUNT(*) FROM " + tableName);
+        assertTrue(rs.next());
+
     }
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/301e961f/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
index b9b7526..ea14715 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
@@ -29,10 +29,13 @@ public class RepairUtil {
         byte[] endKey = environment.getRegion().getRegionInfo().getEndKey();
         byte[] indexKeyEmbedded = startKey.length == 0 ? new byte[endKey.length] : startKey;
         for (StoreFile file : store.getStorefiles()) {
-            byte[] fileFirstRowKey = KeyValue.createKeyValueFromKey(file.getReader().getFirstKey()).getRow();;
-            if ((fileFirstRowKey != null && Bytes.compareTo(file.getReader().getFirstKey(), 0, indexKeyEmbedded.length, indexKeyEmbedded, 0, indexKeyEmbedded.length) != 0)
-                    /*|| (endKey.length > 0 && Bytes.compareTo(file.getLastKey(), endKey) < 0)*/) { return false; }
+            if (file.getReader() != null && file.getReader().getFirstKey() != null) {
+                byte[] fileFirstRowKey = KeyValue.createKeyValueFromKey(file.getReader().getFirstKey()).getRow();
+                ;
+                if ((fileFirstRowKey != null && Bytes.compareTo(file.getReader().getFirstKey(), 0, indexKeyEmbedded.length, indexKeyEmbedded, 0, indexKeyEmbedded.length) != 0)
+                        /* || (endKey.length > 0 && Bytes.compareTo(file.getLastKey(), endKey) < 0) */) { return false; }
+            }
         }
         return true;
     }
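
The RepairUtil change is the NPE fix proper: a store file is now inspected only when both its reader and its first key exist, which is not the case for the files a just-dropped local index leaves behind. The guard, isolated as a small Java sketch (the helper name is ours; StoreFile is the HBase API already quoted in the diff):

    import org.apache.hadoop.hbase.regionserver.StoreFile;

    // Sketch of the guard added above: skip files with nothing to validate.
    static boolean isInspectable(StoreFile file) {
        return file.getReader() != null && file.getReader().getFirstKey() != null;
    }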



[2/3] phoenix git commit: PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration

2017-04-21 Thread ankit
PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fa5281eb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fa5281eb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fa5281eb

Branch: refs/heads/4.x-HBase-0.98
Commit: fa5281ebd119d24c9f0a3d274376a774c5334a37
Parents: 9e7a997
Author: Ankit Singhal 
Authored: Fri Apr 21 11:55:12 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:55:12 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql  |  1 +
 .../apache/phoenix/spark/PhoenixSparkIT.scala   | 27 ++--
 .../phoenix/spark/DataFrameFunctions.scala  | 19 +++---
 .../apache/phoenix/spark/DefaultSource.scala|  2 +-
 4 files changed, 42 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fa5281eb/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index dc24da7..7ac0039 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -17,6 +17,7 @@
 CREATE TABLE table1 (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table1_copy (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table2 (id BIGINT NOT NULL PRIMARY KEY, table1_id BIGINT, "t2col1" VARCHAR)
+CREATE TABLE table3 (id BIGINT NOT NULL PRIMARY KEY, table3_id BIGINT, "t2col1" VARCHAR)
 UPSERT INTO table1 (id, col1) VALUES (1, 'test_row_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (1, 1, 'test_child_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (2, 1, 'test_child_2')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fa5281eb/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
index bb8c302..528b33a 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
@@ -20,15 +20,38 @@ import org.apache.phoenix.util.{ColumnInfo, SchemaUtil}
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.{Row, SQLContext, SaveMode}
 import org.joda.time.DateTime
-
+import org.apache.spark.{SparkConf, SparkContext}
 import scala.collection.mutable.ListBuffer
-
+import org.apache.hadoop.conf.Configuration
 /**
   * Note: If running directly from an IDE, these are the recommended VM parameters:
   * -Xmx1536m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=512m
   */
 class PhoenixSparkIT extends AbstractPhoenixSparkIT {
 
+  test("Can persist data with case senstive columns (like in avro schema) using 'DataFrame.saveToPhoenix'") {
+    val sqlContext = new SQLContext(sc)
+    val df = sqlContext.createDataFrame(
+      Seq(
+        (1, 1, "test_child_1"),
+        (2, 1, "test_child_2"))).toDF("ID", "TABLE3_ID", "t2col1")
+    df.saveToPhoenix("TABLE3", zkUrl = Some(quorumAddress),skipNormalizingIdentifier=true)
+
+    // Verify results
+    val stmt = conn.createStatement()
+    val rs = stmt.executeQuery("SELECT * FROM TABLE3")
+
+    val checkResults = List((1, 1, "test_child_1"), (2, 1, "test_child_2"))
+    val results = ListBuffer[(Long, Long, String)]()
+    while (rs.next()) {
+      results.append((rs.getLong(1), rs.getLong(2), rs.getString(3)))
+    }
+    stmt.close()
+
+    results.toList shouldEqual checkResults
+
+  }
+  
   test("Can convert Phoenix schema") {
     val phoenixSchema = List(
       new ColumnInfo("varcharColumn", PVarchar.INSTANCE.getSqlType)

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fa5281eb/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index ddf4fab..92f4c58 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -24,13 +24,16 @@ import scala.collection.JavaConversions._
 
 
 class DataFrameFunctions(data: DataFrame) extends Serializable {
-
+  def saveToPhoenix(parameters: Map[String, String]): Unit = {
+   
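
For context on what "skip normalization" buys: by default the phoenix-spark integration normalizes DataFrame column names the way Phoenix normalizes SQL identifiers, folding unquoted names to upper case, so a case-sensitive column such as "t2col1" (common with Avro schemas) cannot be matched. A rough Java illustration of that rule (hypothetical helper, not the Phoenix API; the real normalization lives in SchemaUtil):

    // Illustration only: how identifier normalization loses case information.
    static String normalizeIdentifier(String name, boolean skipNormalizing) {
        if (skipNormalizing) {
            return name; // the new option: take the column name as-is
        }
        if (name.startsWith("\"") && name.endsWith("\"")) {
            return name.substring(1, name.length() - 1); // quoted: case preserved
        }
        return name.toUpperCase(); // unquoted: t2col1 -> T2COL1
    }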

[3/3] phoenix git commit: PHOENIX-3759 Dropping a local index causes NPE

2017-04-21 Thread ankit
PHOENIX-3759 Dropping a local index causes NPE


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dcafe80f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dcafe80f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dcafe80f

Branch: refs/heads/4.x-HBase-1.1
Commit: dcafe80f53b3d22b54a1d7c783a1bdf731261b90
Parents: fae0027
Author: Ankit Singhal 
Authored: Fri Apr 21 11:53:22 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:53:22 2017 +0530

--
 .../apache/phoenix/end2end/index/LocalIndexIT.java   | 15 ---
 .../java/org/apache/phoenix/util/RepairUtil.java | 11 +++
 2 files changed, 19 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcafe80f/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index 8d3316b..ea4780b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -599,21 +599,30 @@ public class LocalIndexIT extends BaseLocalIndexIT {
         admin.disableTable(tableName);
         copyLocalIndexHFiles(config, tableRegions.get(0), tableRegions.get(1), false);
         copyLocalIndexHFiles(config, tableRegions.get(3), tableRegions.get(0), false);
-
         admin.enableTable(tableName);
 
         int count=getCount(conn, tableName, "L#0");
         assertTrue(count > 14);
-        admin.majorCompact(tableName);
+        admin.majorCompact(TableName.valueOf(tableName));
         int tryCount = 5;// need to wait for rebuilding of corrupted local index region
         while (tryCount-- > 0 && count != 14) {
-            Thread.sleep(3);
+            Thread.sleep(15000);
             count = getCount(conn, tableName, "L#0");
         }
         assertEquals(14, count);
         rs = statement.executeQuery("SELECT COUNT(*) FROM " + indexName1);
         assertTrue(rs.next());
         assertEquals(7, rs.getLong(1));
+        statement.execute("DROP INDEX " + indexName1 + " ON " + tableName);
+        admin.majorCompact(TableName.valueOf(tableName));
+        statement.execute("DROP INDEX " + indexName + " ON " + tableName);
+        admin.majorCompact(TableName.valueOf(tableName));
+        Thread.sleep(15000);
+        admin.majorCompact(TableName.valueOf(tableName));
+        Thread.sleep(15000);
+        rs = statement.executeQuery("SELECT COUNT(*) FROM " + tableName);
+        assertTrue(rs.next());
+
     }
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcafe80f/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
index b9b7526..ea14715 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/RepairUtil.java
@@ -29,10 +29,13 @@ public class RepairUtil {
         byte[] endKey = environment.getRegion().getRegionInfo().getEndKey();
         byte[] indexKeyEmbedded = startKey.length == 0 ? new byte[endKey.length] : startKey;
         for (StoreFile file : store.getStorefiles()) {
-            byte[] fileFirstRowKey = KeyValue.createKeyValueFromKey(file.getReader().getFirstKey()).getRow();;
-            if ((fileFirstRowKey != null && Bytes.compareTo(file.getReader().getFirstKey(), 0, indexKeyEmbedded.length, indexKeyEmbedded, 0, indexKeyEmbedded.length) != 0)
-                    /*|| (endKey.length > 0 && Bytes.compareTo(file.getLastKey(), endKey) < 0)*/) { return false; }
+            if (file.getReader() != null && file.getReader().getFirstKey() != null) {
+                byte[] fileFirstRowKey = KeyValue.createKeyValueFromKey(file.getReader().getFirstKey()).getRow();
+                ;
+                if ((fileFirstRowKey != null && Bytes.compareTo(file.getReader().getFirstKey(), 0, indexKeyEmbedded.length, indexKeyEmbedded, 0, indexKeyEmbedded.length) != 0)
+                        /* || (endKey.length > 0 && Bytes.compareTo(file.getLastKey(), endKey) < 0) */) { return false; }
+            }
         }
         return true;
     }



[1/3] phoenix git commit: PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-21 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 785c4680e -> dcafe80f5


PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/31f5e15c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/31f5e15c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/31f5e15c

Branch: refs/heads/4.x-HBase-1.1
Commit: 31f5e15ceb118b266e634de61899447d4acd6775
Parents: 785c468
Author: Ankit Singhal 
Authored: Fri Apr 21 11:52:20 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:52:20 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql   | 2 +-
 .../src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/31f5e15c/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index 17a6162..28e0146 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -60,4 +60,4 @@ UPSERT INTO "small" VALUES ('key3', 'xyz', 3)
 CREATE TABLE MULTITENANT_TEST_TABLE (TENANT_ID VARCHAR NOT NULL, ORGANIZATION_ID VARCHAR, GLOBAL_COL1 VARCHAR  CONSTRAINT pk PRIMARY KEY (TENANT_ID, ORGANIZATION_ID)) MULTI_TENANT=true
 CREATE TABLE IF NOT EXISTS GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
 CREATE TABLE IF NOT EXISTS OUTPUT_GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
-  upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')
+upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,null,null,CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/31f5e15c/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
index 63547d2..2c2c6e1 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
@@ -134,9 +134,9 @@ class PhoenixRDD(sc: SparkContext, table: String, columns: Seq[String],
       val rowSeq = columns.map { case (name, sqlType) =>
         val res = pr.resultMap(name)
           // Special handling for data types
-          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19)) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
+          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19) && res!=null) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
-          } else if (sqlType == 92 || sqlType == 18) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
+          } else if ((sqlType == 92 || sqlType == 18) && res!=null) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
           } else {
             res



[2/3] phoenix git commit: PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration

2017-04-21 Thread ankit
PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fae0027d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fae0027d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fae0027d

Branch: refs/heads/4.x-HBase-1.1
Commit: fae0027d0adb93f8d18bee8e34db8e955dde8c36
Parents: 31f5e15
Author: Ankit Singhal 
Authored: Fri Apr 21 11:52:52 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:52:52 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql  |  1 +
 .../apache/phoenix/spark/PhoenixSparkIT.scala   | 27 ++--
 .../phoenix/spark/DataFrameFunctions.scala  | 19 +++---
 .../apache/phoenix/spark/DefaultSource.scala|  2 +-
 4 files changed, 42 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fae0027d/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index 28e0146..79d609e 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -17,6 +17,7 @@
 CREATE TABLE table1 (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table1_copy (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table2 (id BIGINT NOT NULL PRIMARY KEY, table1_id BIGINT, "t2col1" VARCHAR)
+CREATE TABLE table3 (id BIGINT NOT NULL PRIMARY KEY, table3_id BIGINT, "t2col1" VARCHAR)
 UPSERT INTO table1 (id, col1) VALUES (1, 'test_row_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (1, 1, 'test_child_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (2, 1, 'test_child_2')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fae0027d/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
index c9b98f2..97ff6f1 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
@@ -20,15 +20,38 @@ import org.apache.phoenix.util.{ColumnInfo, SchemaUtil}
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.{Row, SQLContext, SaveMode}
 import org.joda.time.DateTime
-
+import org.apache.spark.{SparkConf, SparkContext}
 import scala.collection.mutable.ListBuffer
-
+import org.apache.hadoop.conf.Configuration
 /**
   * Note: If running directly from an IDE, these are the recommended VM parameters:
   * -Xmx1536m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=512m
   */
 class PhoenixSparkIT extends AbstractPhoenixSparkIT {
 
+  test("Can persist data with case senstive columns (like in avro schema) using 'DataFrame.saveToPhoenix'") {
+    val sqlContext = new SQLContext(sc)
+    val df = sqlContext.createDataFrame(
+      Seq(
+        (1, 1, "test_child_1"),
+        (2, 1, "test_child_2"))).toDF("ID", "TABLE3_ID", "t2col1")
+    df.saveToPhoenix("TABLE3", zkUrl = Some(quorumAddress),skipNormalizingIdentifier=true)
+
+    // Verify results
+    val stmt = conn.createStatement()
+    val rs = stmt.executeQuery("SELECT * FROM TABLE3")
+
+    val checkResults = List((1, 1, "test_child_1"), (2, 1, "test_child_2"))
+    val results = ListBuffer[(Long, Long, String)]()
+    while (rs.next()) {
+      results.append((rs.getLong(1), rs.getLong(2), rs.getString(3)))
+    }
+    stmt.close()
+
+    results.toList shouldEqual checkResults
+
+  }
+  
   test("Can convert Phoenix schema") {
     val phoenixSchema = List(
       new ColumnInfo("varcharColumn", PVarchar.INSTANCE.getSqlType)

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fae0027d/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index ddf4fab..92f4c58 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -24,13 +24,16 @@ import scala.collection.JavaConversions._
 
 
 class DataFrameFunctions(data: DataFrame) extends Serializable {
-
+  def saveToPhoenix(parameters: Map[String, String]): Unit = {
+   

[2/3] phoenix git commit: PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration

2017-04-21 Thread ankit
PHOENIX-3792 Provide way to skip normalization of column names in phoenix-spark integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/90e32c01
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/90e32c01
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/90e32c01

Branch: refs/heads/master
Commit: 90e32c015207b39330ed7496db7a73dbc7b634f4
Parents: 28af89c
Author: Ankit Singhal 
Authored: Fri Apr 21 11:48:16 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:48:16 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql  |  1 +
 .../apache/phoenix/spark/PhoenixSparkIT.scala   | 27 ++--
 .../phoenix/spark/DataFrameFunctions.scala  | 19 +++---
 .../apache/phoenix/spark/DefaultSource.scala|  2 +-
 4 files changed, 42 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/90e32c01/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index dc24da7..7ac0039 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -17,6 +17,7 @@
 CREATE TABLE table1 (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table1_copy (id BIGINT NOT NULL PRIMARY KEY, col1 VARCHAR)
 CREATE TABLE table2 (id BIGINT NOT NULL PRIMARY KEY, table1_id BIGINT, "t2col1" VARCHAR)
+CREATE TABLE table3 (id BIGINT NOT NULL PRIMARY KEY, table3_id BIGINT, "t2col1" VARCHAR)
 UPSERT INTO table1 (id, col1) VALUES (1, 'test_row_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (1, 1, 'test_child_1')
 UPSERT INTO table2 (id, table1_id, "t2col1") VALUES (2, 1, 'test_child_2')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/90e32c01/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
index d53b5ee..b8e44fe 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
@@ -20,15 +20,38 @@ import org.apache.phoenix.util.{ColumnInfo, SchemaUtil}
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.{Row, SQLContext, SaveMode}
 import org.joda.time.DateTime
-
+import org.apache.spark.{SparkConf, SparkContext}
 import scala.collection.mutable.ListBuffer
-
+import org.apache.hadoop.conf.Configuration
 /**
   * Note: If running directly from an IDE, these are the recommended VM parameters:
   * -Xmx1536m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=512m
   */
 class PhoenixSparkIT extends AbstractPhoenixSparkIT {
 
+  test("Can persist data with case senstive columns (like in avro schema) using 'DataFrame.saveToPhoenix'") {
+    val sqlContext = new SQLContext(sc)
+    val df = sqlContext.createDataFrame(
+      Seq(
+        (1, 1, "test_child_1"),
+        (2, 1, "test_child_2"))).toDF("ID", "TABLE3_ID", "t2col1")
+    df.saveToPhoenix("TABLE3", zkUrl = Some(quorumAddress),skipNormalizingIdentifier=true)
+
+    // Verify results
+    val stmt = conn.createStatement()
+    val rs = stmt.executeQuery("SELECT * FROM TABLE3")
+
+    val checkResults = List((1, 1, "test_child_1"), (2, 1, "test_child_2"))
+    val results = ListBuffer[(Long, Long, String)]()
+    while (rs.next()) {
+      results.append((rs.getLong(1), rs.getLong(2), rs.getString(3)))
+    }
+    stmt.close()
+
+    results.toList shouldEqual checkResults
+
+  }
+  
   test("Can convert Phoenix schema") {
     val phoenixSchema = List(
       new ColumnInfo("varcharColumn", PVarchar.INSTANCE.getSqlType)

http://git-wip-us.apache.org/repos/asf/phoenix/blob/90e32c01/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
index ddf4fab..92f4c58 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/DataFrameFunctions.scala
@@ -24,13 +24,16 @@ import scala.collection.JavaConversions._
 
 
 class DataFrameFunctions(data: DataFrame) extends Serializable {
-
+  def saveToPhoenix(parameters: Map[String, String]): Unit = {
+   

[1/3] phoenix git commit: PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException

2017-04-21 Thread ankit
Repository: phoenix
Updated Branches:
  refs/heads/master 679ff21b7 -> 92b951e53


PHOENIX-3751 spark 2.1 with Phoenix 4.10 load data as dataframe fail, NullPointerException


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/28af89c4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/28af89c4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/28af89c4

Branch: refs/heads/master
Commit: 28af89c46fa54d7f60adc8be88fdf559cad811d2
Parents: 679ff21
Author: Ankit Singhal 
Authored: Fri Apr 21 11:47:27 2017 +0530
Committer: Ankit Singhal 
Committed: Fri Apr 21 11:47:27 2017 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql   | 2 +-
 .../src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/28af89c4/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index 28eb0f7..dc24da7 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -60,4 +60,4 @@ UPSERT INTO "small" VALUES ('key3', 'xyz', 3)
 CREATE TABLE MULTITENANT_TEST_TABLE (TENANT_ID VARCHAR NOT NULL, ORGANIZATION_ID VARCHAR, GLOBAL_COL1 VARCHAR  CONSTRAINT pk PRIMARY KEY (TENANT_ID, ORGANIZATION_ID)) MULTI_TENANT=true
 CREATE TABLE IF NOT EXISTS GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
 CREATE TABLE IF NOT EXISTS OUTPUT_GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED_TIMESTAMP,varchar_id VARCHAR (30),char_id CHAR (30),binary_id BINARY (100),varbinary_id VARBINARY (100))
- upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')
+ upsert into GIGANTIC_TABLE values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,null,null,CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This is random textA','a','a','a')

http://git-wip-us.apache.org/repos/asf/phoenix/blob/28af89c4/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
index 63547d2..2c2c6e1 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
@@ -134,9 +134,9 @@ class PhoenixRDD(sc: SparkContext, table: String, columns: Seq[String],
       val rowSeq = columns.map { case (name, sqlType) =>
         val res = pr.resultMap(name)
           // Special handling for data types
-          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19)) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
+          if (dateAsTimestamp && (sqlType == 91 || sqlType == 19) && res!=null) { // 91 is the defined type for Date and 19 for UNSIGNED_DATE
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
-          } else if (sqlType == 92 || sqlType == 18) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
+          } else if ((sqlType == 92 || sqlType == 18) && res!=null) { // 92 is the defined type for Time and 18 for UNSIGNED_TIME
             new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
           } else {
             res