[jira] [Resolved] (PHOENIX-4472) Altering properties in the table descriptor is not working properly.

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4472.

Resolution: Fixed

> Altering properties in the table descriptor is not working properly.
> 
>
> Key: PHOENIX-4472
> URL: https://issues.apache.org/jira/browse/PHOENIX-4472
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4472.patch
>
>
> Seems to be a side effect of PHOENIX-4304.
> {code}
> //unable to alter properties in descriptor
> [ERROR] 
> testAddingPkColAndSettingProperties[AlterTableIT_columnEncoded=false](org.apache.phoenix.end2end.AlterTableIT)
>   Time elapsed: 8.507 s  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at 
> org.apache.phoenix.end2end.AlterTableIT.testAddingPkColAndSettingProperties(AlterTableIT.java:945)
> [ERROR] 
> testAddingPkColAndSettingProperties[AlterTableIT_columnEncoded=true](org.apache.phoenix.end2end.AlterTableIT)
>   Time elapsed: 8.495 s  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
>   at 
> org.apache.phoenix.end2end.AlterTableIT.testAddingPkColAndSettingProperties(AlterTableIT.java:945)
> //PhoenixTransactionalProcessor is not added in the descriptor
> [ERROR] 
> testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=false, 
> columnEncoded=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time 
> elapsed: 4.738 s  <<< FAILURE!
> java.lang.AssertionError
>   at 
> org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:760)
> [ERROR] 
> testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=false, 
> columnEncoded=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time 
> elapsed: 3.675 s  <<< FAILURE!
> java.lang.AssertionError
>   at 
> org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:760)
> [ERROR] 
> testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=true, 
> columnEncoded=false](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time 
> elapsed: 4.758 s  <<< ERROR!
> java.lang.IllegalArgumentException: Family '0' already exists so cannot be 
> added
>   at 
> org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:756)
> [ERROR] 
> testMakeBaseTableTransactional[AlterTableWithViewsIT_multiTenant=true, 
> columnEncoded=true](org.apache.phoenix.end2end.AlterTableWithViewsIT)  Time 
> elapsed: 4.805 s  <<< ERROR!
> java.lang.IllegalArgumentException: Family '0' already exists so cannot be 
> added
>   at 
> org.apache.phoenix.end2end.AlterTableWithViewsIT.testMakeBaseTableTransactional(AlterTableWithViewsIT.java:756)
>   
> [ERROR] Tests run: 32, Failures: 16, Errors: 0, Skipped: 0, Time elapsed: 
> 195.018 s <<< FAILURE! - in 
> org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT
> [ERROR] 
> testSettingPropertiesWhenTableHasDefaultColFamilySpecified[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 3.574 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
> [ERROR] 
> testSetPropertyAndAddColumnForDefaultColumnFamily[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 8.596 s  <<< FAILURE!
> java.lang.AssertionError
> [ERROR] 
> testSetHColumnPropertyForTableWithOnlyPKCols1[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 9.484 s  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> [ERROR] 
> testSetHColumnPropertyForTableWithOnlyPKCols2[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 9.494 s  <<< FAILURE!
> java.lang.AssertionError: expected: but was:
> [ERROR] 
> testSetPropertyAndAddColumnForExistingColumnFamily[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 8.592 s  <<< FAILURE!
> java.lang.AssertionError
> [ERROR] 
> testSetHTableAndHColumnProperties[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 3.482 s  <<< FAILURE!
> java.lang.AssertionError: expected:<1> but was:<0>
> [ERROR] 
> testSetPropertyAndAddColumnUsingDefaultColumnFamilySpecifier[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 15.031 s  <<< FAILURE!
> java.lang.AssertionError
> [ERROR] 
> testTTLAssignmentForNewEmptyCF[SetPropertyOnEncodedTableIT](org.apache.phoenix.end2end.SetPropertyOnEncodedTableIT)
>   Time elapsed: 16.725 s  <<< FA

[jira] [Updated] (PHOENIX-4472) Altering properties in the table descriptor is not working properly.

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4472:
---
Attachment: PHOENIX-4472.patch


[jira] [Commented] (PHOENIX-4472) Altering properties in the table descriptor is not working properly.

2017-12-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301068#comment-16301068
 ] 

Ankit Singhal commented on PHOENIX-4472:


The attached patch fixes the remaining issue with 
AlterTableWithViewsIT.testMakeBaseTableTransactional().


[jira] [Updated] (PHOENIX-4483) Fix ImmutableIndexIT test failures when transactions enabled.

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4483:
---
Parent Issue: PHOENIX-4480  (was: PHOENIX-4338)

> Fix ImmutableIndexIT test failures when transactions enabled.
> -
>
> Key: PHOENIX-4483
> URL: https://issues.apache.org/jira/browse/PHOENIX-4483
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)
> [ERROR] 
> testDeleteFromPartialPK[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 4.646 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)
> [ERROR] 
> testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 4.657 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)
> [ERROR] 
> testDeleteFromNonPK[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 4.658 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)
> [ERROR] 
> testDeleteFromPartialPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 9.694 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)
> [ERROR] 
> testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 9.812 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)
> [ERROR] 
> testDeleteFromNonPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 11.728 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)
> [ERROR] 
> testDeleteFromPartialPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 9.683 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)
> [ERROR] 
> testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 9.965 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)
> [ERROR] 
> testDeleteFromNonPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 9.665 s  <<< FAILURE!
> java.lang.AssertionError: expected:<3> but was:<0>
> at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)
> {noformat}





[jira] [Resolved] (PHOENIX-4486) Fix metric ITs

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4486.

Resolution: Duplicate



[jira] [Commented] (PHOENIX-4486) Fix metric ITs

2017-12-21 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301040#comment-16301040
 ] 

Ankit Singhal commented on PHOENIX-4486:


Oops. Duplicate of PHOENIX-4479.



[jira] [Created] (PHOENIX-4486) Fix metric ITs

2017-12-21 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4486:
--

 Summary: Fix metric ITs 
 Key: PHOENIX-4486
 URL: https://issues.apache.org/jira/browse/PHOENIX-4486
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 5.0.0


PhoenixMetricsIT
PartialCommitIT





[jira] [Resolved] (PHOENIX-4485) Fix CsvBulkLoadToolIT for RowTimestamp table upload.

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4485.

Resolution: Fixed



[jira] [Updated] (PHOENIX-4485) Fix CsvBulkLoadToolIT for RowTimestamp table upload.

2017-12-21 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4485:
---
Attachment: PHOENIX-4485.patch



[jira] [Created] (PHOENIX-4485) Fix CsvBulkLoadToolIT for RowTimestamp table upload.

2017-12-21 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4485:
--

 Summary: Fix CsvBulkLoadToolIT for RowTimestamp table upload.
 Key: PHOENIX-4485
 URL: https://issues.apache.org/jira/browse/PHOENIX-4485
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 5.0.0








[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300904#comment-16300904
 ] 

Hadoop QA commented on PHOENIX-4382:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903332/PHOENIX-4382.v3.master.patch
  against master branch at commit 412329a7415302831954891285d291055328c28b.
  ATTACHMENT ID: 12903332

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+table.getImmutableStorageScheme() == 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS
+private  void testValues(boolean immutable, PDataType 
dataType, List testData) throws Exception {
+public SingleCellColumnExpression(PColumn column, String displayName, 
QualifierEncodingScheme encodingScheme, ImmutableStorageScheme 
immutableStorageScheme) {
+}, dataColRef.getFamily(), dataColRef.getQualifier(), 
encodingScheme, immutableStorageScheme);
+KeyValueColumnExpression kvExp = scheme != 
PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN ? new 
SingleCellColumnExpression(scheme)
+return new PArrayDataTypeEncoder(byteStream, oStream, 
numElements, type, SortOrder.ASC, false, getSerializationVersion());
+// array serialization format where bytes are immutable (does not support 
prepend/append or sorting)
+if (serializationVersion == IMMUTABLE_SERIALIZATION_VERSION || 
serializationVersion == IMMUTABLE_SERIALIZATION_V2) {
+if (isNullValue(arrayIndex, bytes, initPos, 
serializationVersion, useShort, indexOffset, currOffset, elementLength)) {
+int separatorBytes =  serializationVersion == 
PArrayDataType.SORTABLE_SERIALIZATION_VERSION ? 3 : 0;

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//console

This message is automatically generated.

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upserting some values, such as Short.MAX_VALUE, results in a 
> null value in query result sets. Mutable tables are not affected. I tried 
> with BigInt and hit the same problem.
> For Short, the breaking point seems to be 32512.
> This happens because of the way we serialize nulls: for nulls, we write 
> out [separatorByte, #_of_nulls]. However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, so we can't distinguish between a 
> null and these values. Currently the code assumes it sees a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See attached tests - testShort(), testBigInt()
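
A minimal, self-contained illustration of the collision (not Phoenix source; it assumes Phoenix's order-preserving, sign-bit-flipped big-endian encoding for SMALLINT and a 0xFF separator byte in this serialization):

{code}
public class SeparatorCollision {
    // Order-preserving 2-byte encoding: flip the sign bit of the high byte.
    static byte[] encodeShort(short v) {
        return new byte[] { (byte) ((v >> 8) ^ 0x80), (byte) v };
    }

    public static void main(String[] args) {
        final int SEPARATOR = 0xFF; // assumed separator byte for this format
        for (short v : new short[] { 32511, 32512, Short.MAX_VALUE }) {
            byte[] b = encodeShort(v);
            System.out.printf("%6d -> [0x%02X, 0x%02X]%s%n", v,
                b[0] & 0xFF, b[1] & 0xFF,
                (b[0] & 0xFF) == SEPARATOR ? "  <-- leading byte equals separator" : "");
        }
        // 32511 -> [0xFE, 0xFF]; 32512 -> [0xFF, 0x00]; 32767 -> [0xFF, 0xFF].
        // A decoder that treats a leading separator byte as a null marker will
        // misread every value in 32512..32767 as [separatorByte, #_of_nulls],
        // which matches the reported breaking point of 32512.
    }
}
{code}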





[jira] [Updated] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4382:
--
Attachment: PHOENIX-4382.v3.master.patch

The v3 patch adds tests for single-byte values and for integers that start with 
separatorByte, and checks that we can differentiate {separatorByte, 2} from two 
nulls.

I also renamed a method for clarity.



Re: Questions regarding interacting with PQS using C# and Protobufs

2017-12-21 Thread YoungWoo Kim
https://github.com/Azure/hdinsight-phoenix-sharp

Might be a good example for you.

- Youngwoo

On Fri, Dec 22, 2017 at 7:02 AM, Chinmay Kulkarni wrote:

> Hi all,
>
> I am trying to create a simple .net client to query data in HBase via
> Phoenix using the Phoenix Query Server and am sort of struggling to find
> documentation or examples for doing the same.
>
> My understanding is that I can do this by sending POST requests to PQS in
> which I send data using the protobuf format. Is this correct? Apache
> Calcite's documentation also mentions using WireMessage APIs to achieve the
> same. Can you please point me towards some resources to help me use
> WireMessage in .net?
>
> Thanks,
> Chinmay
>


Questions regarding interacting with PQS using C# and Protobufs

2017-12-21 Thread Chinmay Kulkarni
Hi all,

I am trying to create a simple .net client to query data in HBase via
Phoenix using the Phoenix Query Server and am sort of struggling to find
documentation or examples for doing the same.

My understanding is that I can do this by sending POST requests to PQS in
which I send data using the protobuf format. Is this correct? Apache
Calcite's documentation also mentions using WireMessage APIs to achieve the
same. Can you please point me towards some resources to help me use
WireMessage in .net?

Thanks,
Chinmay
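
For reference, a minimal Java sketch of the wire exchange (hypothetical helper, not an official client; it assumes the request body is already a serialized Avatica WireMessage built with Avatica's generated protobuf classes, that PQS listens on plain HTTP, by default port 8765, and that the content type below matches Avatica's protobuf transport):

{code}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class PqsClient {
    /** POSTs one serialized Avatica WireMessage to PQS and returns the response bytes. */
    static byte[] post(String pqsUrl, byte[] wireMessageBytes) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(pqsUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Assumed content type for Avatica's protobuf transport.
        conn.setRequestProperty("Content-Type", "application/x-google-protobuf");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(wireMessageBytes);
        }
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[8192];
            for (int n; (n = in.read(chunk)) != -1; ) {
                buf.write(chunk, 0, n);
            }
            return buf.toByteArray(); // response body is another WireMessage
        }
    }
}
{code}

A .NET client would follow the same shape: serialize the request with the protobuf definitions from Calcite Avatica, wrap it in a WireMessage, POST the bytes, and unwrap the WireMessage in the response.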


[jira] [Commented] (PHOENIX-4398) Change QueryCompiler get column expressions process from serial to parallel.

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300555#comment-16300555
 ] 

Hadoop QA commented on PHOENIX-4398:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903294/PHOENIX-4398_V1.patch
  against master branch at commit 9355a4d262d31d8d65e1467bcc351bb99760e11d.
  ATTACHMENT ID: 12903294

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static Configuration config = 
HBaseFactoryProvider.getConfigurationFactory().getConfiguration();
+private static boolean use_compile_parallel = 
config.getBoolean(USE_COMPILE_COLUMN_EXPRESSION_PARALLEL,
+expressions[i++] = ((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression();
+return new ExpressionOrder(((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression(), order);
+public static final String USE_COMPILE_COLUMN_EXPRESSION_PARALLEL = 
"phoenix.use.columnexpression.parallel";
+public static final String COMPILE_COLUMN_EXPRESSION_PARALLEL_THREAD = 
"phoenix.columnexpression.parallel.thread";

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//console

This message is automatically generated.

> Change QueryCompiler get column expressions process from serial to parallel.
> 
>
> Key: PHOENIX-4398
> URL: https://issues.apache.org/jira/browse/PHOENIX-4398
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Albert Lee
> Fix For: 4.11.0, 4.13.0
>
> Attachments: PHOENIX-4398.patch, PHOENIX-4398_V1.patch
>
>
> When QueryCompiler compiles a SELECT statement, the column expressions are 
> built serially. Performance is fine when the table is narrow, but when 
> compiling a wide table (e.g. 130 columns in my use case) this step is very 
> expensive, over 70 ms. So I changed 
> TupleProjector(PTable projectedTable) from a serial for loop to parallel futures.
> Because this only changes performance and adds no new feature, there is 
> no unit test.
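
A minimal sketch of the approach (hypothetical names, not the actual patch): build each column expression on a thread pool and join the futures in submission order so the original column order is preserved.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Function;

public class ParallelExpressionBuilder {
    /** Builds one expression per column on a thread pool, preserving column order. */
    static <C, E> List<E> buildAll(List<C> columns, Function<C, E> build, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<E>> futures = new ArrayList<>(columns.size());
            for (C column : columns) {
                futures.add(pool.submit(() -> build.apply(column)));
            }
            List<E> expressions = new ArrayList<>(futures.size());
            for (Future<E> f : futures) {
                expressions.add(f.get()); // joins in submission order; rethrows build failures
            }
            return expressions;
        } finally {
            pool.shutdown();
        }
    }
}
{code}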





[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300524#comment-16300524
 ] 

Hudson commented on PHOENIX-4437:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1900 (See 
[https://builds.apache.org/job/Phoenix-master/1900/])
PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of 
(maryannxue: rev 412329a7415302831954891285d291055328c28b)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/UnionPlan.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixStatement.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/execute/SortMergeJoinPlan.java


> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4437.patch
>
>






[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-12-21 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300517#comment-16300517
 ] 

ASF GitHub Bot commented on PHOENIX-4370:
-

GitHub user aertoria opened a pull request:

https://github.com/apache/phoenix/pull/287

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics

Opening this PR for the convenience of discussion.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aertoria/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #287


commit 3362d62b0d133d86cfecd1b6af5cf0bbad8f0d44
Author: aertoria 
Date:   2017-12-17T21:33:56Z

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics




> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface hbase metrics from perconnection to global metrics
> Currently, on the Phoenix client side, HBase metrics are recorded and surfaced 
> at the per-connection level. PHOENIX-4370 allows them to be aggregated at the 
> global level, i.e., across all connections within one JVM, so that users can 
> periodically evaluate them as stable metrics. (A sketch of this aggregation 
> follows the quoted metric list below.)
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
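
A minimal sketch of such global aggregation (hypothetical names, not the actual patch): per-connection metric deltas are folded into JVM-wide LongAdder counters keyed by metric name, so they can be sampled periodically without touching individual connections.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public final class GlobalHBaseMetrics {
    private static final Map<String, LongAdder> COUNTERS = new ConcurrentHashMap<>();

    private GlobalHBaseMetrics() {}

    /** Called when a connection publishes its metric deltas, e.g. on commit or close. */
    public static void add(String metricName, long delta) {
        COUNTERS.computeIfAbsent(metricName, k -> new LongAdder()).add(delta);
    }

    /** Current JVM-wide value for one metric, e.g. "COUNT_RPC_CALLS". */
    public static long get(String metricName) {
        LongAdder a = COUNTERS.get(metricName);
        return a == null ? 0L : a.sum();
    }
}
{code}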





[GitHub] phoenix pull request #287: PHOENIX-4370 Surface hbase metrics from perconnec...

2017-12-21 Thread aertoria
GitHub user aertoria opened a pull request:

https://github.com/apache/phoenix/pull/287

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics

Opening this PR for the convenience of discussion.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aertoria/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #287


commit 3362d62b0d133d86cfecd1b6af5cf0bbad8f0d44
Author: aertoria 
Date:   2017-12-17T21:33:56Z

PHOENIX-4370 Surface hbase metrics from perconnection to global metrics






[GitHub] phoenix issue #262: PHOENIX 153 implement TABLESAMPLE clause

2017-12-21 Thread aertoria
Github user aertoria commented on the issue:

https://github.com/apache/phoenix/pull/262
  
Closing this P.R. as it has been merged.




[GitHub] phoenix pull request #262: PHOENIX 153 implement TABLESAMPLE clause

2017-12-21 Thread aertoria
Github user aertoria closed the pull request at:

https://github.com/apache/phoenix/pull/262




[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300515#comment-16300515
 ] 

Vincent Poon commented on PHOENIX-4382:
---

Never mind, I think the issue I came across only happens in tests. What 
happened is that I now serialize nulls only for SORTABLE_SERIALIZATION_VERSION, 
but since my test does backwards-compat testing by decoding 
IMMUTABLE_SERIALIZATION_VERSION (the original v1), I should actually continue 
to serialize nulls for IMMUTABLE_SERIALIZATION_VERSION as well.

You can proceed with review, I'll post additional tests in a v3.



[jira] [Assigned] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2017-12-21 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4476:
---

Assignee: Karan Mehta  (was: Thomas D'Silva)

> Range scan used for point lookups if filter is not in order of primary keys
> ---
>
> Key: PHOENIX-4476
> URL: https://issues.apache.org/jira/browse/PHOENIX-4476
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Mujtaba Chohan
>Assignee: Karan Mehta
>
> {noformat}
> DROP TABLE TEST;
> CREATE TABLE IF NOT EXISTS TEST (
> PK1 CHAR(1) NOT NULL,
> PK2 VARCHAR NOT NULL,
> PK3 VARCHAR NOT NULL,
> PK4 UNSIGNED_LONG NOT NULL,
> PK5 VARCHAR NOT NULL,
> V1 VARCHAR,
> V2 VARCHAR,
> V3 UNSIGNED_LONG,
> CONSTRAINT state_pk PRIMARY KEY (
>   PK1,
>   PK2,
>   PK3,
>   PK4,
>   PK5
> )
> );
> // Incorrect explain plan with un-ordered PKs
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK5, PK2, PK3, PK4)
>   IN (('A', 'E', 'N', 'T', 3), ('A', 'Y', 'G', 'T', 4));
> +------------------------------------------------------------------------+-----------------+----------------+
> |                                  PLAN                                  | EST_BYTES_READ  | EST_ROWS_READ  |
> +------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER TEST ['A']   | null            | null           |
> |     SERVER FILTER BY (PK1, PK5, PK2, PK3, PK4) IN ([65,69,0,78,0,84,0,0,0,0,0,0,0,0,3],[65,89,0,71,0,84,0,0,0,0,0,0,0,0,4]) | null | null |
> +------------------------------------------------------------------------+-----------------+----------------+
> // Correct explain plan with PKs in order
> EXPLAIN SELECT V1 FROM TEST WHERE (PK1, PK2, PK3, PK4, PK5)
>   IN (('A', 'E', 'N', 3, 'T'), ('A', 'Y', 'G', 4, 'T'));
> +------------------------------------------------------------------------+-----------------+----------------+
> |                                  PLAN                                  | EST_BYTES_READ  | EST_ROWS_READ  |
> +------------------------------------------------------------------------+-----------------+----------------+
> | CLIENT 1-CHUNK 2 ROWS 712 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER TEST | 712 |  |
> +------------------------------------------------------------------------+-----------------+----------------+
> {noformat}





[jira] [Assigned] (PHOENIX-4476) Range scan used for point lookups if filter is not in order of primary keys

2017-12-21 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4476:
---

Assignee: Thomas D'Silva



[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300472#comment-16300472
 ] 

Vincent Poon commented on PHOENIX-4382:
---

Found a bug while testing separatorByte values.  Will put up a v3 after I fix 
it.



[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300445#comment-16300445
 ] 

Thomas D'Silva commented on PHOENIX-4382:
-

Sure I will review it soon.



[jira] [Updated] (PHOENIX-4398) Change QueryCompiler get column expressions process from serial to parallel.

2017-12-21 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated PHOENIX-4398:

Attachment: PHOENIX-4398_V1.patch

Recommitting PHOENIX-4398.patch as PHOENIX-4398_V1.patch to see what happens.



[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300203#comment-16300203
 ] 

James Taylor commented on PHOENIX-4382:
---

bq. I can add tests for various values that start with separatorByte, to see if 
they get returned properly.
Sounds like you have good test coverage already, so probably no need to do more.

[~tdsilva] - would you mind reviewing too, please?



[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300165#comment-16300165
 ] 

Vincent Poon commented on PHOENIX-4382:
---

In the test class in the patch, there are tests with two trailing nulls after a 
value, and two trailing nulls before a value.  There's also a test with 298 
nulls in between two values.

I can add tests for various values that start with separatorByte, to see if 
they get returned properly.



[jira] [Commented] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2017-12-21 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300114#comment-16300114
 ] 

Ohad Shacham commented on PHOENIX-4484:
---

[~giacomotaylor], [~tdsilva].

Guys, this is what we discussed: writing the data directly to HBase when 
creating an index for a non-empty table.



[jira] [Updated] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2017-12-21 Thread Ohad Shacham (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ohad Shacham updated PHOENIX-4484:
--
Issue Type: Sub-task  (was: Bug)
Parent: PHOENIX-3623



[jira] [Created] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2017-12-21 Thread Ohad Shacham (JIRA)
Ohad Shacham created PHOENIX-4484:
-

 Summary: Write directly to HBase when creating an index for 
transactional table
 Key: PHOENIX-4484
 URL: https://issues.apache.org/jira/browse/PHOENIX-4484
 Project: Phoenix
  Issue Type: Bug
Reporter: Ohad Shacham


Today, when creating an index table for a non-empty data table, the writes are 
performed using the transaction API, which both consumes client-side memory 
(for storing the write set) and incurs conflict analysis upon commit. This is 
redundant and can be replaced by a direct write to HBase. For this reason, a new 
function should be added to the transaction abstraction layer that writes 
directly to HBase in the Tephra case and adds shadow cells with the fence id 
in the Omid case.
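
A minimal sketch of what such a hook could look like (hypothetical interface and class names, not Phoenix's actual transaction abstraction layer): the Tephra-style implementation can batch the mutations straight to HBase, while an Omid-style one would additionally attach shadow cells carrying the fence id.

{code}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Table;

/** Hypothetical transaction-abstraction-layer hook for index builds. */
interface DirectIndexWriter {
    void writeDirectly(Table indexTable, List<Mutation> mutations) throws IOException;
}

class TephraStyleWriter implements DirectIndexWriter {
    @Override
    public void writeDirectly(Table indexTable, List<Mutation> mutations) throws IOException {
        try {
            // No write-set tracking, no conflict detection: a plain HBase batch.
            indexTable.batch(mutations, new Object[mutations.size()]);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }
}
{code}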





[jira] [Assigned] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2017-12-21 Thread Ohad Shacham (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ohad Shacham reassigned PHOENIX-4484:
-

Assignee: Ohad Shacham



[jira] [Commented] (PHOENIX-4278) Implement pure client side transactional index maintenance

2017-12-21 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300099#comment-16300099
 ] 

Ohad Shacham commented on PHOENIX-4278:
---

Thanks, [~giacomotaylor]. This seems like the right thing to do for the 
transactional case. I will do it, and then we will get the shadow cell updates 
in the index table for free.

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>
> Index maintenance for transactional tables follows the same model as for
> non-transactional tables: a coprocessor hooks data table updates and looks
> up the previous row value to perform the maintenance. This is necessary for
> non-transactional tables to ensure the rows are locked so that a consistent
> view may be obtained. However, for transactional tables, the timestamp
> oracle ensures uniqueness of timestamps (via transaction IDs), and the
> filtering ensures a scan sees the "true" last committed value for a row.
> Thus, there's no hard dependency on performing this on the server side.
> Moving the index maintenance to the client side would avoid any RS->RS RPC
> calls (which have proved troublesome for HBase). It would require returning
> more data to the client (i.e. the prior row value), but this seems like a
> reasonable tradeoff.
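
As a rough illustration of the client-side flow argued for above (all names 
and constants are hypothetical, assuming the server returns the prior row 
value with each data table write):

{code}
// Illustrative sketch only -- names and constants are hypothetical, not
// Phoenix's actual IndexMaintainer API.
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

class ClientSideIndexSketch {
    static final byte[] FAMILY = Bytes.toBytes("0");      // illustrative
    static final byte[] QUALIFIER = Bytes.toBytes("_0");  // illustrative
    static final byte[] EMPTY_VALUE = new byte[0];

    // With the prior row value shipped back to the client, the client can
    // derive both the delete of the old index row and the put for the new
    // one itself, so no RS->RS RPC is needed.
    List<Mutation> clientSideIndexUpdates(byte[] oldIndexRowKey,
                                          byte[] newIndexRowKey,
                                          long transactionId) {
        List<Mutation> updates = new ArrayList<>();
        if (oldIndexRowKey != null) {
            // Remove the index entry derived from the prior row value.
            updates.add(new Delete(oldIndexRowKey).setTimestamp(transactionId));
        }
        // Using the transaction id as the cell timestamp lets the
        // transactional read filter apply the same visibility rules to
        // index rows as to data rows.
        updates.add(new Put(newIndexRowKey)
                .addColumn(FAMILY, QUALIFIER, transactionId, EMPTY_VALUE));
        return updates;
    }
}
{code}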



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4278) Implement pure client side transactional index maintenance

2017-12-21 Thread Ohad Shacham (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ohad Shacham reassigned PHOENIX-4278:
-

Assignee: Ohad Shacham

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>
> Index maintenance for transactional tables follows the same model as for
> non-transactional tables: a coprocessor hooks data table updates and looks
> up the previous row value to perform the maintenance. This is necessary for
> non-transactional tables to ensure the rows are locked so that a consistent
> view may be obtained. However, for transactional tables, the timestamp
> oracle ensures uniqueness of timestamps (via transaction IDs), and the
> filtering ensures a scan sees the "true" last committed value for a row.
> Thus, there's no hard dependency on performing this on the server side.
> Moving the index maintenance to the client side would avoid any RS->RS RPC
> calls (which have proved troublesome for HBase). It would require returning
> more data to the client (i.e. the prior row value), but this seems like a
> reasonable tradeoff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-1661) Implement built-in functions for JSON

2017-12-21 Thread DanielSun (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

DanielSun updated PHOENIX-1661:
---
Attachment: Implement built-in functions for JSON.pdf

> Implement built-in functions for JSON
> -
>
> Key: PHOENIX-1661
> URL: https://issues.apache.org/jira/browse/PHOENIX-1661
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: LeiWang
>  Labels: JSON, Java, SQL, gsoc2015, mentor
> Attachments: Implement built-in functions for JSON.pdf, 
> PHOENIX-1661.patch, PhoenixJSONSpecification-First-Draft.pdf
>
>
> Take a look at the JSON built-in functions that are implemented in Postgres 
> (http://www.postgresql.org/docs/9.3/static/functions-json.html) and implement 
> the same for Phoenix in Java following this guide: 
> http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html
> Examples of functions include ARRAY_TO_JSON, ROW_TO_JSON, TO_JSON, etc. The 
> implementation of these built-in functions will be impacted by how JSON is 
> stored in Phoenix. See PHOENIX-628. An initial implementation could work off 
> of a simple text-based JSON representation and then when a native JSON type 
> is implemented, they could be reworked to be more efficient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-1661) Implement built-in functions for JSON

2017-12-21 Thread DanielSun (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300037#comment-16300037
 ] 

DanielSun commented on PHOENIX-1661:


We are very interested in this issue and would like to propose an idea; the 
following is our implementation document.
We are writing the program according to this document.
[^Implement built-in functions for JSON.pdf]

> Implement built-in functions for JSON
> -
>
> Key: PHOENIX-1661
> URL: https://issues.apache.org/jira/browse/PHOENIX-1661
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: LeiWang
>  Labels: JSON, Java, SQL, gsoc2015, mentor
> Attachments: Implement built-in functions for JSON.pdf, 
> PHOENIX-1661.patch, PhoenixJSONSpecification-First-Draft.pdf
>
>
> Take a look at the JSON built-in functions that are implemented in Postgres 
> (http://www.postgresql.org/docs/9.3/static/functions-json.html) and implement 
> the same for Phoenix in Java following this guide: 
> http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html
> Examples of functions include ARRAY_TO_JSON, ROW_TO_JSON, TO_JSON, etc. The 
> implementation of these built-in functions will be impacted by how JSON is 
> stored in Phoenix. See PHOENIX-628. An initial implementation could work off 
> of a simple text-based JSON representation and then when a native JSON type 
> is implemented, they could be reworked to be more efficient.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4465) Default values of some of the table/column properties like max versions changed in HBase 2.0

2017-12-21 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4465.
--
Resolution: Fixed

Thanks for the review, [~an...@apache.org]. Committed.

> Default values of some of the table/column properties like max versions 
> changed in HBase 2.0
> 
>
> Key: PHOENIX-4465
> URL: https://issues.apache.org/jira/browse/PHOENIX-4465
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4465.patch, PHOENIX-4465_v2.patch
>
>
> There are some test case failures because the default values of some
> table/column properties, like max versions, changed in HBase 2.0; we need to
> update the test cases according to the new values.
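
One way such test fixes can avoid chasing version-specific literals (an 
illustrative sketch with a hypothetical helper, not the committed patch):

{code}
// Illustrative sketch: assert against the HBase client's own default
// constant rather than a hard-coded literal, so the expectation tracks
// whichever HBase version the tests run against.
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;

static void assertDefaultMaxVersions(Admin admin, String table) throws Exception {
    ColumnFamilyDescriptor cf =
            admin.getDescriptor(TableName.valueOf(table)).getColumnFamilies()[0];
    assertEquals(ColumnFamilyDescriptorBuilder.DEFAULT_MAX_VERSIONS,
            cf.getMaxVersions());
}
{code}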



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299850#comment-16299850
 ] 

Hadoop QA commented on PHOENIX-4437:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903178/PHOENIX-4437.patch
  against master branch at commit 9355a4d262d31d8d65e1467bcc351bb99760e11d.
  ATTACHMENT ID: 12903178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StatementPlan compilePlan = compilableStmt.compilePlan(stmt, 
Sequence.ValueOp.VALIDATE_SEQUENCE);
+// For a QueryPlan, we need to get its optimized plan; for a 
MutationPlan, its enclosed QueryPlan
+compilePlan = 
stmt.getConnection().getQueryServices().getOptimizer().optimize(stmt, dataPlan);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4483) Fix ImmutableIndexIT test failures when transactions enabled.

2017-12-21 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4483:


 Summary: Fix ImmutableIndexIT test failures when transactions 
enabled.
 Key: PHOENIX-4483
 URL: https://issues.apache.org/jira/browse/PHOENIX-4483
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0


{noformat}
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)

[ERROR] 
testDeleteFromPartialPK[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 4.646 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)

[ERROR] 
testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 4.657 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)

[ERROR] 
testDeleteFromNonPK[ImmutableIndexIT_localIndex=false,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 4.658 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)

[ERROR] 
testDeleteFromPartialPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 9.694 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)

[ERROR] 
testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 9.812 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)

[ERROR] 
testDeleteFromNonPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 11.728 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)

[ERROR] 
testDeleteFromPartialPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 9.683 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromPartialPK(ImmutableIndexIT.java:186)

[ERROR] 
testDropIfImmutableKeyValueColumn[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 9.965 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDropIfImmutableKeyValueColumn(ImmutableIndexIT.java:140)

[ERROR] 
testDeleteFromNonPK[ImmutableIndexIT_localIndex=true,transactional=true,columnEncoded=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
  Time elapsed: 9.665 s  <<< FAILURE!
java.lang.AssertionError: expected:<3> but was:<0>
at 
org.apache.phoenix.end2end.index.ImmutableIndexIT.testDeleteFromNonPK(ImmutableIndexIT.java:228)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-21 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-4437:
-
Attachment: PHOENIX-4437.patch

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-21 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-4437:
-
Attachment: (was: PHOENIX-4437.patch)

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)