[jira] [Updated] (PHOENIX-5307) Fix HashJoinMoreIT.testBug2961 failing after PHOENIX-5262

2019-05-31 Thread chenglei (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei updated PHOENIX-5307:
--
Description: 
 I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, and 
the error stack is different from the one in PHOENIX-5290:
{code}
java.lang.AssertionError
at 
org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following code; line 453 was modified in PHOENIX-5262:
{code}
445        if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446                || ( !exclusiveUpper 
447                     && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448            key[offset++] = sepByte;
449            // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450            // as we don't want to increment the null byte in this case
451            // To test if this is the last pk column we need to consider the span of this slot
452            // and the field index to see if this slot considers the last column
453            lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454        }
{code}

It did not consider the case where the last field is variable length and also 
{{DESC}}. When the last field is variable length and {{DESC}}, the trailing 
{{0xFF}} separator byte is not removed when the row key is stored in HBase, so 
for such a case we should not set {{lastInclusiveUpperSingleKey}} back to false.
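
A minimal sketch of the direction a fix could take, reusing the variables from 
the snippet above ({{isLastPkColumn}} and {{isVarLengthDesc}} are names 
introduced here for illustration only, not taken from the Phoenix source):
{code:java}
// Sketch only, not the committed patch: reset lastInclusiveUpperSingleKey only
// when this slot is the last pk column AND that column is not a variable-length
// DESC field, since a variable-length DESC field keeps its trailing 0xFF
// separator byte in the stored row key.
boolean isLastPkColumn = (fieldIndex + slotSpan[i]) >= schema.getMaxFields() - 1;
boolean isVarLengthDesc = !isFixedWidth && sepByte == QueryConstants.DESC_SEPARATOR_BYTE;
lastInclusiveUpperSingleKey &= !isLastPkColumn || isVarLengthDesc;
{code}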

  was:
 I noticed {{HashJoinMoreIT.testBug2961}} always failed after PHOENIX-5262, 
which is different from PHOENIX-5290:
{code}
java.lang.AssertionError
at 
org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
{code}

I think this problem is caused by the following code; line 453 was modified in PHOENIX-5262:
{code}
445        if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
446                || ( !exclusiveUpper 
447                     && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
448            key[offset++] = sepByte;
449            // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
450            // as we don't want to increment the null byte in this case
451            // To test if this is the last pk column we need to consider the span of this slot
452            // and the field index to see if this slot considers the last column
453            lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
454        }
{code}

It did not consider the case where the last field is variable length and also 
{{DESC}}. When the last field is variable length and {{DESC}}, the trailing 
{{0xFF}} separator byte is not removed when the row key is stored in HBase, so 
for such a case we should not set {{lastInclusiveUpperSingleKey}} back to false.


> Fix HashJoinMoreIT.testBug2961 failing after PHOENIX-5262
> -
>
> Key: PHOENIX-5307
> URL: https://issues.apache.org/jira/browse/PHOENIX-5307
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Attachments: PHOENIX-5307_v1-4.x-HBase-1.4.patch
>
>
>  I noticed {{HashJoinMoreIT.testBug2961}} always fails after PHOENIX-5262, 
> and the error stack is different from the one in PHOENIX-5290:
> {code}
> java.lang.AssertionError
>   at 
> org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:908)
> {code}
> I think this problem is caused by the following code; line 453 was modified in PHOENIX-5262:
> {code}
> 445        if ( !isFixedWidth && ( sepByte == QueryConstants.DESC_SEPARATOR_BYTE 
> 446                || ( !exclusiveUpper 
> 447                     && (fieldIndex < schema.getMaxFields() || inclusiveUpper || exclusiveLower) ) ) ) {
> 448            key[offset++] = sepByte;
> 449            // Set lastInclusiveUpperSingleKey back to false if this is the last pk column
> 450            // as we don't want to increment the null byte in this case
> 451            // To test if this is the last pk column we need to consider the span of this slot
> 452            // and the field index to see if this slot considers the last column
> 453            lastInclusiveUpperSingleKey &= (fieldIndex + slotSpan[i]) < schema.getMaxFields()-1;
> 454        }
> {code}
> It did not consider the case where the last field is variable length and also 
> DESC; when the last field is variable length and also {{DESC}}, the trailii

[jira] [Updated] (PHOENIX-5262) Wrong Result on Salted table with some Variable Length PKs

2019-05-31 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5262:
---
Fix Version/s: 5.1.0

> Wrong Result on Salted table with some Variable Length PKs
> --
>
> Key: PHOENIX-5262
> URL: https://issues.apache.org/jira/browse/PHOENIX-5262
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5262.patch, PHOENIX-5262v2.patch, 
> PHOENIX-5262v3.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5262) Wrong Result on Salted table with some Variable Length PKs

2019-05-31 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5262:
---
Fix Version/s: 4.15.0

> Wrong Result on Salted table with some Variable Length PKs
> --
>
> Key: PHOENIX-5262
> URL: https://issues.apache.org/jira/browse/PHOENIX-5262
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5262.patch, PHOENIX-5262v2.patch, 
> PHOENIX-5262v3.patch
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5314:
--
Attachment: website_presto_connector.patch

> Add Presto connector link to website
> 
>
> Key: PHOENIX-5314
> URL: https://issues.apache.org/jira/browse/PHOENIX-5314
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: publish.diff, website_presto_connector.patch
>
>
> Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5314:
--
Attachment: publish.diff

> Add Presto connector link to website
> 
>
> Key: PHOENIX-5314
> URL: https://issues.apache.org/jira/browse/PHOENIX-5314
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: publish.diff, website_presto_connector.patch
>
>
> Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5314:
-

 Summary: Add Presto connector link to website
 Key: PHOENIX-5314
 URL: https://issues.apache.org/jira/browse/PHOENIX-5314
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.1.0
Reporter: Vincent Poon
Assignee: Vincent Poon
 Attachments: publish.diff, website_presto_connector.patch

Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5313) All mappers grab all RegionLocations from .META

2019-05-31 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5313:
-
Description: 
Phoenix's MapReduce integration lives in PhoenixInputFormat. It implements 
getSplits by calculating a QueryPlan for the provided SELECT query, and each 
split gets a mapper. As part of this QueryPlan generation, we grab all 
RegionLocations from .META

In PhoenixInputFormat:getQueryPlan: 
{code:java}
 // Initialize the query plan so it sets up the parallel scans
 queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
{code}

In MapReduceParallelScanGrouper.getRegionBoundaries()
{code:java}
return context.getConnection().getQueryServices().getAllTableRegions(tableName);
{code}

This is fine.

Unfortunately, each mapper Task spawned by the job will go through this _same_ 
exercise. It will pass a MapReduceParallelScanGrouper to queryPlan.iterator(), 
which I believe is eventually causing getRegionBoundaries to get called when 
the scans are initialized in the result iterator.

Since HBase 1.x and up got rid of .META prefetching and caching within the 
HBase client, not only will each _Job_ make potentially thousands of calls to 
.META, but potentially thousands of _Tasks_ will each make potentially 
thousands of calls to .META as well. 

We should get a QueryPlan and set up the scans without having to read all 
RegionLocations, either by using the mapper's internal knowledge of its split 
key range, or by serializing the query plan from the client and sending it to 
the mapper tasks for use there (sketched below). 

Note that MapReduce tasks over snapshots are not affected by this, because 
region locations are stored in the snapshot manifest. 
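
A rough sketch of the serialization option, written against the standard Hadoop 
InputFormat API; treat the PhoenixInputSplit constructor and the exact QueryPlan 
accessors here as assumptions for illustration, not the final patch:
{code:java}
// Sketch under assumptions: compute the parallel scans once on the client and
// carry them inside each split, so a mapper builds its RecordReader from its
// own split instead of regenerating the whole plan (and re-reading .META).
@Override
public List<InputSplit> getSplits(JobContext context) throws IOException {
    QueryPlan plan = getQueryPlan(context);      // the only .META round-trips
    List<InputSplit> splits = new ArrayList<>();
    for (List<Scan> scansForRegion : plan.getScans()) {
        // hypothetical: a Writable split that serializes its scans
        splits.add(new PhoenixInputSplit(scansForRegion));
    }
    return splits;
}
{code}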

  was:
Phoenix's MapReduce integration lives in PhoenixInputFormat. It implements 
getSplits by calculating a QueryPlan for the provided SELECT query, and each 
split gets a mapper. As part of this QueryPlan generation, we grab all 
RegionLocations from .META

In PhoenixInputFormat:getQueryPlan: 
{code:java}
 // Initialize the query plan so it sets up the parallel scans
 queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
{code}

In MapReduceParallelScanGrouper.getRegionBoundaries()
{code:java}
return context.getConnection().getQueryServices().getAllTableRegions(tableName);
{code}

This is fine.

Unfortunately, each mapper Task spawned by the job will go through this _same_ 
exercise when trying to create the RecordReader. Since HBase 1.x and up got rid 
of .META prefetching and caching within the HBase client, that means that not 
only will each _Job_ make potentially thousands of calls to .META, potentially 
thousands of _Tasks_ will do the same. 

The createRecordReader should get a QueryPlan without having to read all 
RegionLocations, either by using its internal knowledge of its split key range, 
or by serializing the query plan from the client and sending it to the mapper 
tasks for use there. 

Note that MapReduce tasks over snapshots are not affected by this, because 
region locations are stored in the snapshot manifest. 


> All mappers grab all RegionLocations from .META
> ---
>
> Key: PHOENIX-5313
> URL: https://issues.apache.org/jira/browse/PHOENIX-5313
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Priority: Major
>
> Phoenix's MapReduce integration lives in PhoenixInputFormat. It implements 
> getSplits by calculating a QueryPlan for the provided SELECT query, and each 
> split gets a mapper. As part of this QueryPlan generation, we grab all 
> RegionLocations from .META
> In PhoenixInputFormat:getQueryPlan: 
> {code:java}
>  // Initialize the query plan so it sets up the parallel scans
>  queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
> {code}
> In MapReduceParallelScanGrouper.getRegionBoundaries()
> {code:java}
> return context.getConnection().getQueryServices().getAllTableRegions(tableName);
> {code}
> This is fine.
> Unfortunately, each mapper Task spawned by the job will go through this 
> _same_ exercise. It will pass a MapReduceParallelScanGrouper to 
> queryPlan.iterator(), which I believe is eventually causing 
> getRegionBoundaries to get called when the scans are initialized in the 
> result iterator.
> Since HBase 1.x and up got rid of .META prefetching and caching within the 
> HBase client, that means that not only will each _Job_ make potentially 
> thousands of calls to .META, potentially thousands of _Tasks_ will each make 
> potentially thousands of calls to .META. 
> We should get a QueryPlan and set up the scans without having to read all 
> RegionLocations, either by using the mapper's internal knowledge of its split 
> key range, or by serializing the query plan from the client and sending it to 
> the mapper tasks for use there. 
> Note that MapR

[jira] [Created] (PHOENIX-5313) All mappers grab all RegionLocations from .META

2019-05-31 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-5313:


 Summary: All mappers grab all RegionLocations from .META
 Key: PHOENIX-5313
 URL: https://issues.apache.org/jira/browse/PHOENIX-5313
 Project: Phoenix
  Issue Type: Bug
Reporter: Geoffrey Jacoby


Phoenix's MapReduce integration lives in PhoenixInputFormat. It implements 
getSplits by calculating a QueryPlan for the provided SELECT query, and each 
split gets a mapper. As part of this QueryPlan generation, we grab all 
RegionLocations from .META

In PhoenixInputFormat:getQueryPlan: 
{code:java}
 // Initialize the query plan so it sets up the parallel scans
 queryPlan.iterator(MapReduceParallelScanGrouper.getInstance());
{code}

In MapReduceParallelScanGrouper.getRegionBoundaries()
{code:java}
return context.getConnection().getQueryServices().getAllTableRegions(tableName);
{code}

This is fine.

Unfortunately, each mapper Task spawned by the job will go through this _same_ 
exercise when trying to create the RecordReader. Since HBase 1.x and up got rid 
of .META prefetching and caching within the HBase client, that means that not 
only will each _Job_ make potentially thousands of calls to .META, potentially 
thousands of _Tasks_ will do the same. 

The createRecordReader should get a QueryPlan without having to read all 
RegionLocations, either by using its internal knowledge of its split key range, 
or by serializing the query plan from the client and sending it to the mapper 
tasks for use there. 

Note that MapReduce tasks over snapshots are not affected by this, because 
region locations are stored in the snapshot manifest. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5312) Publish official Phoenix docker image

2019-05-31 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5312:
-

 Summary: Publish official Phoenix docker image
 Key: PHOENIX-5312
 URL: https://issues.apache.org/jira/browse/PHOENIX-5312
 Project: Phoenix
  Issue Type: Wish
Affects Versions: 5.1.0
Reporter: Vincent Poon


Provide a canonical image to make it easy for new users to download and 
immediately run and play around with Phoenix.
This is also the first step in using tools like 
[docker-client|https://github.com/spotify/docker-client] to run integration 
tests against a docker image.
Other projects like the Presto-phoenix connector could then also execute tests 
against released images.

Ideally, we publish the image on docker hub as an ["Official 
Image"|https://docs.docker.com/docker-hub/official_images/]




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5290) HashJoinMoreIT is failing

2019-05-31 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5290.

Resolution: Duplicate

> HashJoinMoreIT is failing
> -
>
> Key: PHOENIX-5290
> URL: https://issues.apache.org/jira/browse/PHOENIX-5290
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> [INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 91.509 s <<< FAILURE! - in org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] testBug2961(org.apache.phoenix.end2end.join.HashJoinMoreIT)  Time 
> elapsed: 2.42 s  <<< ERROR!
> java.lang.IllegalArgumentException: 6 > 5
> at 
> org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:898)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5290) HashJoinMoreIT is failing

2019-05-31 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5290.

Resolution: Fixed

Closing as dup of PHOENIX-5307

> HashJoinMoreIT is failing
> -
>
> Key: PHOENIX-5290
> URL: https://issues.apache.org/jira/browse/PHOENIX-5290
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> [INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 91.509 s <<< FAILURE! - in org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] testBug2961(org.apache.phoenix.end2end.join.HashJoinMoreIT)  Time 
> elapsed: 2.42 s  <<< ERROR!
> java.lang.IllegalArgumentException: 6 > 5
> at 
> org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:898)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5290) HashJoinMoreIT is failing

2019-05-31 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened PHOENIX-5290:


> HashJoinMoreIT is failing
> -
>
> Key: PHOENIX-5290
> URL: https://issues.apache.org/jira/browse/PHOENIX-5290
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.1, 5.1.0
>Reporter: Lars Hofhansl
>Priority: Major
>
> {code}
> [INFO] Running org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 91.509 s <<< FAILURE! - in org.apache.phoenix.end2end.join.HashJoinMoreIT
> [ERROR] testBug2961(org.apache.phoenix.end2end.join.HashJoinMoreIT)  Time 
> elapsed: 2.42 s  <<< ERROR!
> java.lang.IllegalArgumentException: 6 > 5
> at 
> org.apache.phoenix.end2end.join.HashJoinMoreIT.testBug2961(HashJoinMoreIT.java:898)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5275) Remove accidental imports from curator-client-2.12.0

2019-05-31 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5275:
--
Attachment: PHOENIX-5275.master.v2.patch

> Remove accidental imports from curator-client-2.12.0
> 
>
> Key: PHOENIX-5275
> URL: https://issues.apache.org/jira/browse/PHOENIX-5275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Assignee: William Shen
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5275.master.v1.patch, 
> PHOENIX-5275.master.v2.patch
>
>
> The following imports 
> import org.apache.curator.shaded.com.google.common.*
> were accidentally introduced in
> phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
> phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java
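>
> Presumably the fix swaps each shaded import back to the plain Guava package; 
> for example (the class name here is chosen for illustration):
> {code:java}
> // accidental, pulled in from Curator's shaded Guava:
> import org.apache.curator.shaded.com.google.common.collect.Maps;
> // intended, the plain Guava import:
> import com.google.common.collect.Maps;
> {code}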



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-05-31 Thread William Shen
Thomas,

Which release line do we currently base our documentation on? Do you think
it makes sense to bring the site source into master, and always update the
site from master?

- Will

On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva wrote:

> Currently this would not be easy to do since we have multiple branches. If 
> we decide to implement Lars' proposal to have a single branch and a module 
> per supported HBase version, then we could have a module for the website as 
> well.
>
> On Thu, May 30, 2019 at 7:03 PM swaroopa kadam wrote:
>
> > Huge +1!
> >
> > On Thu, May 30, 2019 at 4:38 PM William Shen wrote:
> >
> > > Hi all,
> > >
> > > Currently, the Phoenix site is maintained in and built from SVN. Not sure 
> > > what level of work it would require, but does it make sense to move the 
> > > source from svn to git, so contribution to the website can follow the same 
> > > JIRA/git workflow as the rest of the project? It could also make sure 
> > > changes to Phoenix code are checked in with corresponding documentation 
> > > changes when needed.
> > >
> > > - Will
> > >
>


[jira] [Created] (PHOENIX-5311) Integration tests leak tables when running on distributed cluster

2019-05-31 Thread JIRA
István Tóth created PHOENIX-5311:


 Summary: Integration tests leak tables when running on distributed 
cluster
 Key: PHOENIX-5311
 URL: https://issues.apache.org/jira/browse/PHOENIX-5311
 Project: Phoenix
  Issue Type: Bug
Reporter: István Tóth


When the integration test suite is run via End2EndTestDriver on a distributed 
cluster, most tests do not clean up their tables, leaving thousands of tables 
on the cluster and exhausting RegionServer memory.

There are actually three problems:
 * The BaseTest.freeResourcesIfBeyondThreshold() method is called after most 
tests and restarts the MiniCluster, thus freeing resources, but it has no 
effect when running on a distributed cluster.
 * The TestDriver sets phoenix.schema.dropMetaData to false by default, so even 
if the Phoenix tables are dropped, the HBase tables are not, and the table leak 
remains (see the sketch after this list).
 * The phoenix.schema.dropMetaData setting cannot be easily overridden because 
of PHOENIX-5310
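
For reference, a sketch of what the intended override would look like, assuming 
a plain Hadoop Configuration is what the test driver consumes; per the last two 
points above, the override is not honored today:
{code:java}
// Sketch only: ask Phoenix to drop the underlying HBase tables together with
// the Phoenix metadata. Today this override is effectively lost because
// ReadOnlyProps does not surface overridden properties (PHOENIX-5310).
Configuration conf = HBaseConfiguration.create();
conf.setBoolean("phoenix.schema.dropMetaData", true);
{code}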

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5310) ReadOnlyProps iterator does not return all properties

2019-05-31 Thread JIRA
István Tóth created PHOENIX-5310:


 Summary: ReadOnlyProps iterator does not return all properties
 Key: PHOENIX-5310
 URL: https://issues.apache.org/jira/browse/PHOENIX-5310
 Project: Phoenix
  Issue Type: Bug
Reporter: István Tóth


The asMap(), iterate(), and isEmpty() methods in 
org.apache.phoenix.util.ReadOnlyProps ignore both the contents of the 
overrideProps variable and the variable substitution logic.
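
A minimal sketch of the expected behavior; {{props}} is a hypothetical name for 
the base property map, while {{overrideProps}} is named in the report:
{code:java}
// Sketch only: iteration (and likewise asMap()/isEmpty()) should see the
// overrides layered on top of the base properties, e.g. by merging first.
public Iterator<Map.Entry<String, String>> iterator() {
    Map<String, String> merged = new HashMap<>(props); // base properties
    merged.putAll(overrideProps);                      // overrides take precedence
    return merged.entrySet().iterator();
}
{code}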

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)