[jira] [Assigned] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei reassigned PHOENIX-5307:
-

Assignee: chenglei

> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> with a failure different from the one in that JIRA:
> {code}
> java.lang.AssertionError
> 	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following line 453, modified in PHOENIX-5262:
> {code}
> 445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446        || ( !exclusiveUpper 
> 447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448        key[offset++] = sepByte;
> 449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450        // as we don't want to increment the null byte in this case
> 451        // To test if this is the last pk column we need to consider the span of this slot
> 452        // and the field index to see if this slot considers the last column
> 453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454    }
> {code}
> It did not consider the case where the last field is variable length and also 
> {{DESC}}: the trailing 0xFF is then not removed, so in that case we should not 
> set {{lastInclusiveUpperSingleKey}} back to false.
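
A minimal sketch of the kind of guard this suggests (illustrative only, not the
committed patch): clear the flag only for an ASC (null-byte) separator, since a
variable-length {{DESC}} field keeps its trailing 0xFF:
{code}
// Illustrative sketch only, not the actual PHOENIX-5307 patch: skip the reset
// when the separator is the DESC separator (0xFF), because for a variable-length
// DESC field the trailing 0xFF stays in the key and must still be incremented
// for an inclusive upper bound.
if (sepByte != QueryConstants.DESC_SEPARATOR_BYTE) {
    lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields() - 1;
}
{code}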



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Description: 
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and also 
{{DESC}}: the trailing 0xFF is then not removed, so in that case we should not 
set {{lastInclusiveUpperSingleKey}} back to false.

  was:
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and also 
{{DESC}}: the trailing 0xFF is then not removed, so for such a field we should 
not set {{lastInclusiveUpperSingleKey}} back to false.


> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> with a failure different from the one in that JIRA:
> {code}
> java.lang.AssertionError
> 	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following line 453, modified in PHOENIX-5262:
> {code}
> 445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446        || ( !exclusiveUpper 
> 447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448        key[offset++] = sepByte;
> 449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450        // as we don't want to increment the null byte in this case
> 451        // To test if this is the last pk column we need to consider the span of this slot
> 452        // and the field index to see if this slot considers the last column
> 453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454    }
> {code}
> It did not consider the case where the last field is variable length and also 
> {{DESC}}: the trailing 0xFF is then not removed, so in that case we should not 
> set {{lastInclusiveUpperSingleKey}} back to false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Description: 
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and also 
{{DESC}}: the trailing 0xFF is then not removed, so for such a field we should 
not set {{lastInclusiveUpperSingleKey}} back to false.

  was:
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and also 
DESC: the trailing 0xFF is then not removed, so for such a field we should not 
set lastInclusiveUpperSingleKey back to false.


> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> with a failure different from the one in that JIRA:
> {code}
> java.lang.AssertionError
> 	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following line 453, modified in PHOENIX-5262:
> {code}
> 445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446        || ( !exclusiveUpper 
> 447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448        key[offset++] = sepByte;
> 449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450        // as we don't want to increment the null byte in this case
> 451        // To test if this is the last pk column we need to consider the span of this slot
> 452        // and the field index to see if this slot considers the last column
> 453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454    }
> {code}
> It did not consider the case where the last field is variable length and also 
> {{DESC}}: the trailing 0xFF is then not removed, so for such a field we should 
> not set {{lastInclusiveUpperSingleKey}} back to false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Description: 
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and also 
DESC: the trailing 0xFF is then not removed, so for such a field we should not 
set lastInclusiveUpperSingleKey back to false.

  was:
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and is 
DESC: the trailing 0xFF is then not removed, so for such a field we should not 
set lastInclusiveUpperSingleKey back to false.


> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> with a failure different from the one in that JIRA:
> {code}
> java.lang.AssertionError
> 	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following line 453, modified in PHOENIX-5262:
> {code}
> 445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446        || ( !exclusiveUpper 
> 447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448        key[offset++] = sepByte;
> 449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450        // as we don't want to increment the null byte in this case
> 451        // To test if this is the last pk column we need to consider the span of this slot
> 452        // and the field index to see if this slot considers the last column
> 453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454    }
> {code}
> It did not consider the case where the last field is variable length and also 
> DESC: the trailing 0xFF is then not removed, so for such a field we should not 
> set lastInclusiveUpperSingleKey back to false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Description: 
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
with a failure different from the one in that JIRA:
{code}
java.lang.AssertionError
	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following line 453, modified in PHOENIX-5262:
{code}
445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446        || ( !exclusiveUpper 
447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448        key[offset++] = sepByte;
449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450        // as we don't want to increment the null byte in this case
451        // To test if this is the last pk column we need to consider the span of this slot
452        // and the field index to see if this slot considers the last column
453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454    }
{code}

It did not consider the case where the last field is variable length and is 
DESC: the trailing 0xFF is then not removed, so for such a field we should not 
set lastInclusiveUpperSingleKey back to false.

> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> with a failure different from the one in that JIRA:
> {code}
> java.lang.AssertionError
> 	at org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following line 453, modified in PHOENIX-5262:
> {code}
> 445    if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446        || ( !exclusiveUpper 
> 447             && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448        key[offset++] = sepByte;
> 449        // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450        // as we don't want to increment the null byte in this case
> 451        // To test if this is the last pk column we need to consider the span of this slot
> 452        // and the field index to see if this slot considers the last column
> 453        lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454    }
> {code}
> It did not consider the case where the last field is variable length and is 
> DESC: the trailing 0xFF is then not removed, so for such a field we should not 
> set lastInclusiveUpperSingleKey back to false.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Summary: Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262  (was: 
Fix HashJoinMoreIT.test failed after PHOENIX-5262)

> Fix HashJoinMoreIT.testBug2961 failed after PHOENIX-5262
> 
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.test failed after PHOENIX-5262

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Summary: Fix HashJoinMoreIT.test failed after PHOENIX-5262  (was: Fix 
HashJoinMoreIT.test failed after PHOENIX-5232)

> Fix HashJoinMoreIT.test failed after PHOENIX-5262
> -
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5307) Fix HashJoinMoreIT.test failed after PHOENIX-5232

2019-05-29 Thread chenglei (JIRA)
chenglei created PHOENIX-5307:
-

 Summary: Fix HashJoinMoreIT.test failed after PHOENIX-5232
 Key: PHOENIX-5307
 URL: https://issues.apache.org/jira/browse/PHOENIX-5307
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: chenglei






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5292) fix MathTrigFunctionTest file compile error for branch 4.x-HBase-1.3/1.4/1.5

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5292:
--
Fix Version/s: 4.15.0

> fix MathTrigFunctionTest file compile error  for branch 4.x-HBase-1.3/1.4/1.5
> -
>
> Key: PHOENIX-5292
> URL: https://issues.apache.org/jira/browse/PHOENIX-5292
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5292-4.x-HBase-1.4.patch
>
>
> {{MathTrigFunctionTest}} causes a compile failure on branches 4.x-HBase-1.3 and 
> 4.x-HBase-1.4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5292) fix MathTrigFunctionTest file compile error for branch 4.x-HBase-1.3/1.4/1.5

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5292:
--
Affects Version/s: 4.15.0

> fix MathTrigFunctionTest file compile error  for branch 4.x-HBase-1.3/1.4/1.5
> -
>
> Key: PHOENIX-5292
> URL: https://issues.apache.org/jira/browse/PHOENIX-5292
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-5292-4.x-HBase-1.4.patch
>
>
> {{MathTrigFunctionTest}} causes a compile failure on branches 4.x-HBase-1.3 and 
> 4.x-HBase-1.4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5292) fix MathTrigFunctionTest file compile error for branch 4.x-HBase-1.3/1.4/1.5

2019-05-29 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5292:
--
Summary: fix MathTrigFunctionTest file compile error  for branch 
4.x-HBase-1.3/1.4/1.5  (was: fix MathTrigFunctionTest file compile error )

> fix MathTrigFunctionTest file compile error  for branch 4.x-HBase-1.3/1.4/1.5
> -
>
> Key: PHOENIX-5292
> URL: https://issues.apache.org/jira/browse/PHOENIX-5292
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Attachments: PHOENIX-5292-4.x-HBase-1.4.patch
>
>
> {{MathTrigFunctionTest}} causes a compile failure on branches 4.x-HBase-1.3 and 
> 4.x-HBase-1.4.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-05-29 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4893:

Fix Version/s: 5.1.0
   4.15.0

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5228) use slf4j for logging in phoenix project

2019-05-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5228:
---
Attachment: PHOENIX-5228.patch

> use slf4j for logging in phoenix project
> 
>
> Key: PHOENIX-5228
> URL: https://issues.apache.org/jira/browse/PHOENIX-5228
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1, 5.1.0
>Reporter: Mihir Monani
>Assignee: Xinyi Yan
>Priority: Trivial
>  Labels: SFDC
> Attachments: PHOENIX-5228-4.x-HBase-1.2.patch, 
> PHOENIX-5228-4.x-HBase-1.3.patch, PHOENIX-5228-4.x-HBase-1.4.patch, 
> PHOENIX-5228.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to use slf4j for logging in the phoenix project. Here is a 
> list of files that don't use slf4j. 
> phoenix-core :-
> {noformat}
> WALRecoveryRegionPostOpenIT.java
> WALReplayWithIndexWritesAndCompressedWALIT.java
> BasePermissionsIT.java
> ChangePermissionsIT.java
> IndexRebuildIncrementDisableCountIT.java
> InvalidIndexStateClientSideIT.java
> MutableIndexReplicationIT.java
> FailForUnsupportedHBaseVersionsIT.java
> SecureUserConnectionsIT.java
> PhoenixMetricsIT.java
> BaseTracingTestIT.java
> PhoenixTracingEndToEndIT.java
> PhoenixRpcSchedulerFactory.java
> IndexHalfStoreFileReaderGenerator.java
> BinaryCompatibleBaseDecoder.java
> ServerCacheClient.java
> CallRunner.java
> MetaDataRegionObserver.java
> PhoenixAccessController.java
> ScanRegionObserver.java
> TaskRegionObserver.java
> DropChildViewsTask.java
> IndexRebuildTask.java
> BaseQueryPlan.java
> HashJoinPlan.java
> CollationKeyFunction.java
> Indexer.java
> LockManager.java
> BaseIndexBuilder.java
> IndexBuildManager.java
> NonTxIndexBuilder.java
> IndexMemStore.java
> BaseTaskRunner.java
> QuickFailingTaskRunner.java
> TaskBatch.java
> ThreadPoolBuilder.java
> ThreadPoolManager.java
> IndexManagementUtil.java
> IndexWriter.java
> IndexWriterUtils.java
> KillServerOnFailurePolicy.java
> ParallelWriterIndexCommitter.java
> RecoveryIndexWriter.java
> TrackingParallelWriterIndexCommitter.java
> PhoenixIndexFailurePolicy.java
> PhoenixTransactionalIndexer.java
> SnapshotScanner.java
> PhoenixEmbeddedDriver.java
> PhoenixResultSet.java
> QueryLogger.java
> QueryLoggerDisruptor.java
> TableLogWriter.java
> PhoenixInputFormat.java
> PhoenixOutputFormat.java
> PhoenixRecordReader.java
> PhoenixRecordWriter.java
> PhoenixServerBuildIndexInputFormat.java
> PhoenixMRJobSubmitter.java
> PhoenixConfigurationUtil.java
> Metrics.java
> DefaultStatisticsCollector.java
> StatisticsScanner.java
> PhoenixMetricsSink.java
> TraceReader.java
> TraceSpanReceiver.java
> TraceWriter.java
> Tracing.java
> EquiDepthStreamHistogram.java
> PhoenixMRJobUtil.java
> QueryUtil.java
> ServerUtil.java
> ZKBasedMasterElectionUtil.java
> IndexTestingUtils.java
> StubAbortable.java
> TestIndexWriter.java
> TestParalleIndexWriter.java
> TestParalleWriterIndexCommitter.java
> TestWALRecoveryCaching.java
> LoggingSink.java
> ParameterizedPhoenixCanaryToolIT.java
> CoprocessorHConnectionTableFactoryTest.java
> TestUtil.java{noformat}
> phoenix-tracing-webapp :-
> {noformat}
> org/apache/phoenix/tracingwebapp/http/Main.java
> {noformat}
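
For reference, the target pattern is standard slf4j usage; a minimal sketch 
(class name and log message are illustrative, not from the patch):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Slf4jExample {
    private static final Logger LOGGER = LoggerFactory.getLogger(Slf4jExample.class);

    public void connect(String url) {
        // Parameterized logging: the message is only assembled if DEBUG is enabled.
        LOGGER.debug("Connecting to {}", url);
    }
}
{code}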



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5228) use slf4j for logging in phoenix project

2019-05-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5228:
---
Attachment: (was: PHOENIX-5228.patch)

> use slf4j for logging in phoenix project
> 
>
> Key: PHOENIX-5228
> URL: https://issues.apache.org/jira/browse/PHOENIX-5228
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1, 5.1.0
>Reporter: Mihir Monani
>Assignee: Xinyi Yan
>Priority: Trivial
>  Labels: SFDC
> Attachments: PHOENIX-5228-4.x-HBase-1.2.patch, 
> PHOENIX-5228-4.x-HBase-1.3.patch, PHOENIX-5228-4.x-HBase-1.4.patch, 
> PHOENIX-5228.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> It would be good to use slf4j for logging in the phoenix project. Here is a 
> list of files that don't use slf4j. 
> phoenix-core :-
> {noformat}
> WALRecoveryRegionPostOpenIT.java
> WALReplayWithIndexWritesAndCompressedWALIT.java
> BasePermissionsIT.java
> ChangePermissionsIT.java
> IndexRebuildIncrementDisableCountIT.java
> InvalidIndexStateClientSideIT.java
> MutableIndexReplicationIT.java
> FailForUnsupportedHBaseVersionsIT.java
> SecureUserConnectionsIT.java
> PhoenixMetricsIT.java
> BaseTracingTestIT.java
> PhoenixTracingEndToEndIT.java
> PhoenixRpcSchedulerFactory.java
> IndexHalfStoreFileReaderGenerator.java
> BinaryCompatibleBaseDecoder.java
> ServerCacheClient.java
> CallRunner.java
> MetaDataRegionObserver.java
> PhoenixAccessController.java
> ScanRegionObserver.java
> TaskRegionObserver.java
> DropChildViewsTask.java
> IndexRebuildTask.java
> BaseQueryPlan.java
> HashJoinPlan.java
> CollationKeyFunction.java
> Indexer.java
> LockManager.java
> BaseIndexBuilder.java
> IndexBuildManager.java
> NonTxIndexBuilder.java
> IndexMemStore.java
> BaseTaskRunner.java
> QuickFailingTaskRunner.java
> TaskBatch.java
> ThreadPoolBuilder.java
> ThreadPoolManager.java
> IndexManagementUtil.java
> IndexWriter.java
> IndexWriterUtils.java
> KillServerOnFailurePolicy.java
> ParallelWriterIndexCommitter.java
> RecoveryIndexWriter.java
> TrackingParallelWriterIndexCommitter.java
> PhoenixIndexFailurePolicy.java
> PhoenixTransactionalIndexer.java
> SnapshotScanner.java
> PhoenixEmbeddedDriver.java
> PhoenixResultSet.java
> QueryLogger.java
> QueryLoggerDisruptor.java
> TableLogWriter.java
> PhoenixInputFormat.java
> PhoenixOutputFormat.java
> PhoenixRecordReader.java
> PhoenixRecordWriter.java
> PhoenixServerBuildIndexInputFormat.java
> PhoenixMRJobSubmitter.java
> PhoenixConfigurationUtil.java
> Metrics.java
> DefaultStatisticsCollector.java
> StatisticsScanner.java
> PhoenixMetricsSink.java
> TraceReader.java
> TraceSpanReceiver.java
> TraceWriter.java
> Tracing.java
> EquiDepthStreamHistogram.java
> PhoenixMRJobUtil.java
> QueryUtil.java
> ServerUtil.java
> ZKBasedMasterElectionUtil.java
> IndexTestingUtils.java
> StubAbortable.java
> TestIndexWriter.java
> TestParalleIndexWriter.java
> TestParalleWriterIndexCommitter.java
> TestWALRecoveryCaching.java
> LoggingSink.java
> ParameterizedPhoenixCanaryToolIT.java
> CoprocessorHConnectionTableFactoryTest.java
> TestUtil.java{noformat}
> phoenix-tracing-webapp :-
> {noformat}
> org/apache/phoenix/tracingwebapp/http/Main.java
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5306) Misleading statement in document

2019-05-29 Thread Krishna Maheshwari (JIRA)
Krishna Maheshwari created PHOENIX-5306:
---

 Summary: Misleading statement in document
 Key: PHOENIX-5306
 URL: https://issues.apache.org/jira/browse/PHOENIX-5306
 Project: Phoenix
  Issue Type: Bug
Reporter: Krishna Maheshwari


[https://svn.apache.org/repos/asf/phoenix/site/source/src/site/markdown/views.md]
 has the following misleading statement, as HBase scaling is not limited by the 
number of tables but rather by the overall number of regions.

"The standard SQL view syntax (with some limitations) is now supported by 
Phoenix to enable multiple virtual tables to all share the same underlying 
physical HBase table. This is especially important in HBase, as you cannot 
realistically expect to have more than perhaps up to a hundred physical tables 
and continue to get reasonable performance from HBase."

This should be revised to state:

"The standard SQL view syntax (with some limitations) is now supported by 
Phoenix to enable multiple virtual tables to all share the same underlying 
physical HBase table."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5220) Create table fails when using the same connection after schema upgrade

2019-05-29 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5220:
---

Assignee: (was: Swaroopa Kadam)

> Create table fails when using the same connection after schema upgrade
> --
>
> Key: PHOENIX-5220
> URL: https://issues.apache.org/jira/browse/PHOENIX-5220
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.1
>Reporter: Jacob Isaac
>Priority: Major
> Attachments: Screen Shot 2019-03-28 at 9.37.23 PM.png
>
>
> Steps:
> 1. Try to upgrade system.catalog from 4.10 to 4.13
> 2. Run Execute Upgrade
> 3. Creating a table will fail with the following exception -
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=SYSTEM.CATALOG.USE_STATS_FOR_PARALLELIZATION
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:475)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:450)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:755)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:741)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:389)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:272)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2665)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1097)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1775)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4871) Query parser throws exception on parameterized join

2019-05-29 Thread Miles Spielberg (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Miles Spielberg updated PHOENIX-4871:
-
Attachment: PHOENIX-4871.master.v1.patch

> Query parser throws exception on parameterized join
> ---
>
> Key: PHOENIX-4871
> URL: https://issues.apache.org/jira/browse/PHOENIX-4871
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: This issue exists on version 4 and I could reproduce it 
> on current git repo version 
>Reporter: Mehdi Salarkia
>Priority: Major
> Attachments: PHOENIX-4871-repo.patch, PHOENIX-4871.master.v1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When a join select statement has a parameter, the Phoenix query parser fails 
> to create query metadata and rejects this query:
> {code:java}
> SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1") WHERE "B"."b2" = ?
> {code}
> with the following exception: 
>  
> {code:java}
> org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : while 
> preparing SQL: SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1") 
> WHERE ("B"."b2" = ?) 
> at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
> at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:358)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:175)
> at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testParameterizedJoin(QueryServerBasicsIT.java:377)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
> at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:700)
> at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:726)
> at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:195)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1215)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1186)
> at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
> at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
> at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
> at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
> at 
> 
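
For reference, a minimal JDBC sketch of the failing pattern (connection URL and 
bind value are hypothetical); binding the parameter and executing is what should 
work here:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ParameterizedJoinExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT \"A\".\"a2\" FROM \"A\" JOIN \"B\" ON (\"A\".\"a1\" = \"B\".\"b1\") WHERE \"B\".\"b2\" = ?")) {
            ps.setString(1, "some-value"); // the bind the parser fails to account for
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
{code}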

[jira] [Assigned] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender reassigned PHOENIX-5211:


Assignee: Gokcen Iskender  (was: Kadir OZDEMIR)

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
>
> Without transactional tables, immutable global indexes can easily get out 
> of sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to have consistent immutable global indexes without 
> the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: (was: PHOENIX-5272.patch)

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch, PHOENIX-5272.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax; see the sketch below.
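
A minimal sketch of issuing the new statement over JDBC (the REBUILD ALL ASYNC 
syntax is taken from this Jira's title; connection URL and names are hypothetical):
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RebuildAllAsyncExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Kicks off a full asynchronous rebuild of MY_INDEX on MY_TABLE.
            stmt.execute("ALTER INDEX MY_INDEX ON MY_TABLE REBUILD ALL ASYNC");
        }
    }
}
{code}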



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: PHOENIX-5272.patch

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch, PHOENIX-5272.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5231) Configurable Stats Cache

2019-05-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5231.
---
Resolution: Fixed

Pushed addendum to 4.x and master branches

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5231-quickfix-v2.txt, 5231-quickfix.txt, 
> 5231-services-fix.patch, PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Currently, the phoenix stats cache is per 
> ConnectionQueryServices/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wastes resources when these separate connections 
> query the same underlying table. It would be good to provide a configurable 
> stats cache and control over the cache level, so it could be per JVM; see the 
> sketch below.
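
To illustrate the idea (a sketch only, not the actual Phoenix API): a single 
JVM-wide guidepost cache keyed by table, shared across connection profiles, 
e.g. built on Guava:
{code:java}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

// Hypothetical JVM-level stats cache: one instance shared by every
// ConnectionQueryServices in the process, instead of one cache per profile.
public final class JvmStatsCache {
    private static final Cache<String, byte[]> GUIDEPOSTS = CacheBuilder.newBuilder()
            .maximumSize(10_000)                      // bound the number of cached tables
            .expireAfterWrite(10, TimeUnit.MINUTES)   // refresh guideposts periodically
            .build();

    public static byte[] get(String tableName) {
        return GUIDEPOSTS.getIfPresent(tableName);
    }

    public static void put(String tableName, byte[] guideposts) {
        GUIDEPOSTS.put(tableName, guideposts);
    }
}
{code}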



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4273) MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes

2019-05-29 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4273:
---
Attachment: 4273-fix-4.x.txt

> MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes
> -
>
> Key: PHOENIX-4273
> URL: https://issues.apache.org/jira/browse/PHOENIX-4273
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 4273-fix-4.x.txt, PHOENIX-4273.patch
>
>
> Relevant stacktrace:
> {code}
> 2017-10-03 14:04:54,315 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742] 
> org.apache.hadoop.hbase.ipc.CallRunner(126): 
> RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742: callId: 2641 
> service: ClientService methodName: Scan size: 32 connection: 10.0.1.43:61774
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> 2017-10-03 14:04:54,315 WARN  [main] 
> org.apache.hadoop.hbase.client.ScannerCallable(392): Ignore, probably already 
> closed
> org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:389)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:207)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:766)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:69)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:145)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1091)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:163)
>   at 
> 

[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: PHOENIX-5272.patch

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch, PHOENIX-5272.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: PHOENIX-5272-4.x.patch

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: (was: PHOENIX-5272-4.x.patch)

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5272) Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async

2019-05-29 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5272:
-
Attachment: (was: PHOENIX-5272.patch)

> Support ALTER INDEX REBUILD ALL ASYNC to fully rebuild global indexes async
> ---
>
> Key: PHOENIX-5272
> URL: https://issues.apache.org/jira/browse/PHOENIX-5272
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5272-4.x.patch
>
>
> This Jira is part of PHOENIX-4703. In PHOENIX-4703, a flag was added to 
> IndexTool so that we can delete all global indexes before rebuilding. This 
> Jira concentrates on the ALTER INDEX REBUILD ALL SQL syntax.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4273) MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes

2019-05-29 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-4273:
--

Assignee: Lars Hofhansl  (was: Rajeshbabu Chintaguntla)

> MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes
> -
>
> Key: PHOENIX-4273
> URL: https://issues.apache.org/jira/browse/PHOENIX-4273
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4273.patch
>
>
> Relevant stacktrace:
> {code}
> 2017-10-03 14:04:54,315 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742] 
> org.apache.hadoop.hbase.ipc.CallRunner(126): 
> RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742: callId: 2641 
> service: ClientService methodName: Scan size: 32 connection: 10.0.1.43:61774
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> 2017-10-03 14:04:54,315 WARN  [main] 
> org.apache.hadoop.hbase.client.ScannerCallable(392): Ignore, probably already 
> closed
> org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:389)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:207)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:766)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:69)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:145)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1091)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:163)
>   at 
> 
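> {code}
> 
> For illustration, a minimal split-tolerant client follows. It is a sketch,
> not the test's actual logic or a fix, and it assumes the
> NotServingRegionException from the trace above surfaces to the caller; it
> re-creates the scanner and resumes just past the last row already returned.
> {code}
> import java.io.IOException;
> import java.util.Arrays;
> 
> import org.apache.hadoop.hbase.NotServingRegionException;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> 
> // Sketch only: count rows while tolerating a region split mid-scan.
> final class SplitTolerantScan {
>     static long countRows(Table table, Scan scan) throws IOException {
>         long count = 0;
>         byte[] lastRow = null;
>         while (true) {
>             if (lastRow != null) {
>                 // Smallest key strictly greater than lastRow: append a 0x00 byte.
>                 scan.setStartRow(Arrays.copyOf(lastRow, lastRow.length + 1));
>             }
>             try (ResultScanner scanner = table.getScanner(scan)) {
>                 Result r;
>                 while ((r = scanner.next()) != null) {
>                     lastRow = r.getRow();
>                     count++;
>                 }
>                 return count;
>             } catch (NotServingRegionException e) {
>                 // Region was re-opened after the scanner was created; retry.
>             }
>         }
>     }
> }
> {code}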

[jira] [Updated] (PHOENIX-4273) MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes

2019-05-29 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4273:
---
Fix Version/s: 5.1.0
   4.15.0

> MutableIndexSplitIT#testSplitDuringIndexScan is failing for local indexes
> -
>
> Key: PHOENIX-4273
> URL: https://issues.apache.org/jira/browse/PHOENIX-4273
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4273.patch
>
>
> Relevant stacktrace:
> {code}
> 2017-10-03 14:04:54,315 DEBUG 
> [RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742] 
> org.apache.hadoop.hbase.ipc.CallRunner(126): 
> RpcServer.FifoWFPBQ.default.handler=4,queue=0,port=61742: callId: 2641 
> service: ClientService methodName: Scan size: 32 connection: 10.0.1.43:61774
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> 2017-10-03 14:04:54,315 WARN  [main] 
> org.apache.hadoop.hbase.client.ScannerCallable(392): Ignore, probably already 
> closed
> org.apache.hadoop.hbase.NotServingRegionException: 
> org.apache.hadoop.hbase.NotServingRegionException: Region was re-opened after 
> the scanner42 was created: 
> IDX_T02,,1507064585384.adfef66e27a6688c5fcfe70d230248d8.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegionScanner(RSRpcServices.java:2465)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2750)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
>   at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
>   at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:389)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:207)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)
>   at 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:314)
>   at 
> org.apache.hadoop.hbase.client.ClientScanner.close(ClientScanner.java:766)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:69)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:145)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1091)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:163)
>   at 
> 

[jira] [Resolved] (PHOENIX-5115) MutableIndexSplitForwardScanIT.testSplitDuringIndexScan[MutableIndexSplitIT_localIndex=true,...] is failing

2019-05-29 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5115.

Resolution: Duplicate

Closing as dup of PHOENIX-4273

> MutableIndexSplitForwardScanIT.testSplitDuringIndexScan[MutableIndexSplitIT_localIndex=true,...]
>  is failing
> ---
>
> Key: PHOENIX-5115
> URL: https://issues.apache.org/jira/browse/PHOENIX-5115
> Project: Phoenix
>  Issue Type: Task
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 5115.txt
>
>
> I noticed this when testing for PHOENIX-5112.
> It looks like each time we split during a scan, the client gets an
> exception.
> With the problems mentioned in PHOENIX-4849 in mind, I think this is
> expected, but I wanted to track it here. Specifically, there are other
> scenarios (such as aggregates) where restarting upon a split cannot be
> supported.
> {code}
> 2019-01-30 10:28:23,955 DEBUG [phoenix-21-thread-98] 
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas(200): Scan with 
> primary region returns org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1108 (XCL08): Cache of 
> region boundaries are out of date. tableName=TBL_N000775
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:163)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.preScannerOpen(BaseScannerRegionObserver.java:191)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$43.call(RegionCoprocessorHost.java:1352)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$43.call(RegionCoprocessorHost.java:1349)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerOpen(RegionCoprocessorHost.java:1349)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2965)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:3272)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42002)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 
> 1108 (XCL08): Cache of region boundaries are out of date. 
> tableName=TBL_N000775
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.throwIfScanOutOfRegion(BaseScannerRegionObserver.java:162)
>   ... 13 more
> {code}
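> 
> For context, the check behind that error looks roughly like the sketch
> below. This is a simplified assumption of what
> BaseScannerRegionObserver.throwIfScanOutOfRegion does, not the actual code:
> if the scan range is no longer fully contained in the region, e.g. after a
> split, the server rejects the scan so the client can refresh its
> region-boundary cache.
> {code}
> import java.io.IOException;
> 
> import org.apache.hadoop.hbase.util.Bytes;
> 
> // Sketch only: reject a scan whose range falls outside the hosting region.
> final class RegionBoundaryCheck {
>     static void throwIfScanOutOfRegion(byte[] scanStart, byte[] scanStop,
>             byte[] regionStart, byte[] regionEnd) throws IOException {
>         boolean startsBeforeRegion = Bytes.compareTo(scanStart, regionStart) < 0;
>         // An empty regionEnd means the region extends to the end of the table.
>         boolean endsAfterRegion = regionEnd.length > 0
>                 && (scanStop.length == 0 || Bytes.compareTo(scanStop, regionEnd) > 0);
>         if (startsBeforeRegion || endsAfterRegion) {
>             // Phoenix raises StaleRegionBoundaryCacheException (ERROR 1108, XCL08) here.
>             throw new IOException("Cache of region boundaries are out of date.");
>         }
>     }
> }
> {code}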



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5295) Local Index data not replicating for older HBase versions (<= HBase 1.2)

2019-05-29 Thread Hieu Nguyen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hieu Nguyen updated PHOENIX-5295:
-
Attachment: PHOENIX-5295.4.14-cdh5.11.v3.patch

> Local Index data not replicating for older HBase versions (<= HBase 1.2)
> 
>
> Key: PHOENIX-5295
> URL: https://issues.apache.org/jira/browse/PHOENIX-5295
> Project: Phoenix
>  Issue Type: Bug
> Environment: Branch 4.14-cdh5.11
>Reporter: Hieu Nguyen
>Priority: Major
> Attachments: PHOENIX-5295.4.14-cdh5.11.v1.patch, 
> PHOENIX-5295.4.14-cdh5.11.v1.patch, PHOENIX-5295.4.14-cdh5.11.v2.patch, 
> PHOENIX-5295.4.14-cdh5.11.v3.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Copied from email thread 
> https://lists.apache.org/thread.html/7ab1d9489eca2ab2b974948fbe60b143fda432ef7dfc603528d460f2@%3Cuser.phoenix.apache.org%3E.
> ---
> We are on Phoenix 4.14-cdh5.11.  We are experiencing an issue where local 
> index data is not being replicated through HBase replication.  As suggested 
> in a previous email thread 
> (https://lists.apache.org/thread.html/984fba3c8abd944846deefb3ea285195e0436b9181b9779feac39b59@%3Cuser.phoenix.apache.org%3E),
>  we have enabled replication for the local indexes (the "L#0" column family 
> on the same table).  We wrote an integration test to demonstrate this issue 
> on top of 4.14-cdh5.11 branch 
> (https://github.com/hnguyen08/phoenix/commit/3589cb45d941c6909fb3deb5f5abb0f8dfa78dd7).
> After some investigation and debugging, we discovered the following:
> 1. Commit a2f4d7eebec621b58204a9eb78d552f18dcbcf24 (PHOENIX-3827) fixed the 
> issue, but only in Phoenix for HBase 1.3+.  It uses the 
> miniBatchOp.addOperationsFromCP() API introduced in HBase 1.3.  Unfortunately, 
> for the time being, we are stuck on cdh5.11 (based on HBase 1.2).
> 2. IndexUtil.writeLocalUpdates() is called in both implementations of 
> IndexCommitter, both taking skipWAL=true.  It seems like we'd actually want 
> to not skip WAL to ensure that local-index updates are replicated correctly 
> (since, as mentioned in the above email thread, "HBase-level replication of 
> the data table will not trigger index updates").  After changing the skipWAL 
> flag to false, the above integration test passes.
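> To illustrate the effect of that flag, here is a simplified sketch. The
> helper below is hypothetical and only mirrors what
> IndexUtil.writeLocalUpdates() with skipWAL=true effectively does to the
> mutations; it is not the patch itself.
> {code}
> import java.util.List;
> 
> import org.apache.hadoop.hbase.client.Durability;
> import org.apache.hadoop.hbase.client.Mutation;
> 
> // Sketch only: HBase replication ships WAL entries, so mutations applied
> // with Durability.SKIP_WAL never reach replication peers.
> final class LocalIndexDurability {
>     static void setDurability(List<Mutation> localIndexUpdates, boolean skipWAL) {
>         for (Mutation m : localIndexUpdates) {
>             m.setDurability(skipWAL ? Durability.SKIP_WAL : Durability.USE_DEFAULT);
>         }
>         // The mutations are then applied via Region#batchMutate(...); with
>         // skipWAL=false they are written to the WAL and therefore replicated.
>     }
> }
> {code}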



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5305) Move expression function tests to the right folder

2019-05-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5305:
---
Attachment: PHOENIX-5305-4.x-HBase-1.3.patch

> Move expression function tests to the right folder
> --
>
> Key: PHOENIX-5305
> URL: https://issues.apache.org/jira/browse/PHOENIX-5305
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Trivial
> Attachments: PHOENIX-5305-4.x-HBase-1.3.patch, 
> PHOENIX-5305-4.x-HBase-1.4.patch, PHOENIX-5305-4.x-HBase-1.5.patch, 
> PHOENIX-5305.patch
>
>
> As [~chenglei] discovered on the other Jira, many Phoenix expression function 
> tests are not under the right folder. Put function tests under the package 
> {{org.apache.phoenix.expression.function}} instead of 
> {{org.apache.phoenix.expression}} for better code quality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5305) Move expression function tests to the right folder

2019-05-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5305:
---
Attachment: PHOENIX-5305-4.x-HBase-1.5.patch

> Move expression function tests to the right folder
> --
>
> Key: PHOENIX-5305
> URL: https://issues.apache.org/jira/browse/PHOENIX-5305
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Trivial
> Attachments: PHOENIX-5305-4.x-HBase-1.3.patch, 
> PHOENIX-5305-4.x-HBase-1.4.patch, PHOENIX-5305-4.x-HBase-1.5.patch, 
> PHOENIX-5305.patch
>
>
> As [~chenglei] discovered on the other Jira, many Phoenix expression function 
> tests are not under the right folder. Put function tests under the package 
> {{org.apache.phoenix.expression.function}} instead of 
> {{org.apache.phoenix.expression}} for better code quality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5305) Move expression function tests to the right folder

2019-05-29 Thread Xinyi Yan (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-5305:
---
Attachment: PHOENIX-5305.patch

> Move expression function tests to the right folder
> --
>
> Key: PHOENIX-5305
> URL: https://issues.apache.org/jira/browse/PHOENIX-5305
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Trivial
> Attachments: PHOENIX-5305-4.x-HBase-1.3.patch, 
> PHOENIX-5305-4.x-HBase-1.4.patch, PHOENIX-5305-4.x-HBase-1.5.patch, 
> PHOENIX-5305.patch
>
>
> As [~chenglei] discovered on the other Jira, many Phoenix expression function 
> tests are not under the right folder. Put function tests under the package 
> {{org.apache.phoenix.expression.function}} instead of 
> {{org.apache.phoenix.expression}} for better code quality.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New committer Swaroopa Kadam

2019-05-29 Thread swaroopa kadam
Thank you, everyone! I'm excited to contribute further to the community. 

On Tue, May 28, 2019 at 4:56 PM Vincent Poon  wrote:

> Congrats Swaroopa, looking forward to more patches
>
> On Tue, May 28, 2019 at 4:33 PM Xu Cang  wrote:
>
> > Congrats! :)
> >
> > On Tue, May 28, 2019 at 4:18 PM Priyank Porwal  wrote:
> >
> > > Congrats Swaroopa!
> > >
> > > On Tue, May 28, 2019, 3:24 PM Andrew Purtell  wrote:
> > >
> > > > Congratulations Swaroopa!
> > > >
> > > > On Tue, May 28, 2019 at 2:38 PM Geoffrey Jacoby  wrote:
> > > >
> > > > > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> > > > > Swaroopa Kadam has accepted our invitation to become a Phoenix
> > > > > committer. Swaroopa has contributed to a number of areas in the
> > > > > project, including the query server[1] and been an active participant
> > > > > in many code reviews for others' patches.
> > > > >
> > > > > Congratulations, Swaroopa, and we look forward to many more great
> > > > > contributions from you!
> > > > >
> > > > > Geoffrey Jacoby
> > > > >
> > > > > [1] -
> > > > > https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20%3D%20Resolved%20AND%20assignee%20in%20(swaroopa)
> > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Andrew
> > > >
> > > > Words like orphans lost among the crosstalk, meaning torn from truth's
> > > > decrepit hands
> > > >- A23, Crosstalk
> > > >
> > >
> >
>
-- 


Swaroopa Kadam
https://about.me/swaroopa_kadam