[jira] [Updated] (PHOENIX-6590) Handle rollbacks in phoenix spark connector and add way to control batch wise or task wise transactions

2022-08-26 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6590:
---
Fix Version/s: connectors-6.0.0

> Handle rollbacks in phoenix spark connector and add way to control batch wise 
> or task wise transactions
> ---
>
> Key: PHOENIX-6590
> URL: https://issues.apache.org/jira/browse/PHOENIX-6590
> Project: Phoenix
>  Issue Type: Sub-task
>  Components: spark-connector
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: connectors-6.0.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6411) Verify and add transactions usability in Phoenix Spark connector

2022-08-25 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6411:
---
Fix Version/s: connectors-6.0.0

> Verify and add transactions usability in Phoenix Spark connector
> 
>
> Key: PHOENIX-6411
> URL: https://issues.apache.org/jira/browse/PHOENIX-6411
> Project: Phoenix
>  Issue Type: Bug
>  Components: spark-connector
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: connectors-6.0.0
>
>
> Currently there is:
> 1) no way to enable/disable autocommit for the connections created
> internally within the readers/writers, and
> 2) no control for the application developer, since commits are called
> internally whenever the batch size is reached.
> 3) no way to issue explicit commits (a write on a Dataset should call commit
> explicitly anyway).
> Addressing these would make it possible to use transactions within the
> connector, along the lines of the sketch below.
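> As a rough illustration (a sketch assuming a plain Phoenix JDBC connection
> rather than the connector's internal ones; the URL and table name are
> placeholders), application-controlled transactions would look like:
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
>
> public class ExplicitCommitSketch {
>     public static void main(String[] args) throws Exception {
>         // "jdbc:phoenix:<zk-quorum>" is the standard Phoenix JDBC URL form
>         try (Connection conn =
>                 DriverManager.getConnection("jdbc:phoenix:localhost")) {
>             conn.setAutoCommit(false); // 1) application controls autocommit
>             try (PreparedStatement ps = conn.prepareStatement(
>                     "UPSERT INTO MY_TABLE VALUES (?, ?)")) {
>                 for (int i = 0; i < 1000; i++) {
>                     ps.setInt(1, i);
>                     ps.setInt(2, i * 2);
>                     ps.executeUpdate(); // buffered client-side until commit
>                 }
>             }
>             conn.commit(); // 3) one explicit commit, no hidden per-batch commits
>         }
>     }
> }
> {code}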



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6710:
---
Fix Version/s: 5.1.3

> Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables
> ---
>
> Key: PHOENIX-6710
> URL: https://issues.apache.org/jira/browse/PHOENIX-6710
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.11.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> PHOENIX-3842 was done to work around PHOENIX-3797 in order to unblock a
> release, under the assumption that Phoenix is not used for GETs.
>  
> At one of our users, we saw that they have been doing heavy GETs in their
> custom coprocessor to check whether a key is present in the current region.
> About 99% of the time the key is not expected to be present during the
> initial load, as keys are expected to be random, but there is still a chance
> that roughly 1% of keys are duplicated. In the absence of a BloomFilter,
> HBase has to seek into the HFile to confirm that the key is not present,
> which results in a performance regression of about 2x.
>  
> Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
> impacted without bloom filters.
>  
> Phoenix is still used for GETs by users (SELECT queries with the key as a
> filter), and we also have constructs that intrinsically do GETs, such as
> index maintenance and "ON DUPLICATE KEY". So I believe the bloom filter
> should be "ON" by default; I don't see any downside to it, even when it is
> not being used.
>  
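> As a quick way to verify the effect (a sketch against the HBase 2.x client
> API; the table name is a placeholder), this prints the bloom filter type
> currently configured for each column family of a Phoenix-managed table:
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.TableDescriptor;
>
> public class BloomFilterCheck {
>     public static void main(String[] args) throws IOException {
>         Configuration conf = HBaseConfiguration.create();
>         try (Connection conn = ConnectionFactory.createConnection(conf);
>              Admin admin = conn.getAdmin()) {
>             TableDescriptor td = admin.getDescriptor(TableName.valueOf("MY_TABLE"));
>             for (ColumnFamilyDescriptor cfd : td.getColumnFamilies()) {
>                 // With PHOENIX-3842 in place this prints NONE; after the
>                 // revert, new tables should default to ROW again.
>                 System.out.println(cfd.getNameAsString() + " -> "
>                         + cfd.getBloomFilterType());
>             }
>         }
>     }
> }
> {code}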



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6710:
---
Fix Version/s: 5.2.0

> Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables
> ---
>
> Key: PHOENIX-6710
> URL: https://issues.apache.org/jira/browse/PHOENIX-6710
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.11.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-3842 was done to work around PHOENIX-3797 in order to unblock a
> release, under the assumption that Phoenix is not used for GETs.
>  
> At one of our users, we saw that they have been doing heavy GETs in their
> custom coprocessor to check whether a key is present in the current region.
> About 99% of the time the key is not expected to be present during the
> initial load, as keys are expected to be random, but there is still a chance
> that roughly 1% of keys are duplicated. In the absence of a BloomFilter,
> HBase has to seek into the HFile to confirm that the key is not present,
> which results in a performance regression of about 2x.
>  
> Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
> impacted without bloom filters.
>  
> Phoenix is still used for GETs by users (SELECT queries with the key as a
> filter), and we also have constructs that intrinsically do GETs, such as
> index maintenance and "ON DUPLICATE KEY". So I believe the bloom filter
> should be "ON" by default; I don't see any downside to it, even when it is
> not being used.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6710.

Resolution: Fixed

> Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables
> ---
>
> Key: PHOENIX-6710
> URL: https://issues.apache.org/jira/browse/PHOENIX-6710
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.11.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.2.0
>
>
> PHOENIX-3842 was done to work around PHOENIX-3797 in order to unblock a
> release, under the assumption that Phoenix is not used for GETs.
>  
> At one of our users, we saw that they have been doing heavy GETs in their
> custom coprocessor to check whether a key is present in the current region.
> About 99% of the time the key is not expected to be present during the
> initial load, as keys are expected to be random, but there is still a chance
> that roughly 1% of keys are duplicated. In the absence of a BloomFilter,
> HBase has to seek into the HFile to confirm that the key is not present,
> which results in a performance regression of about 2x.
>  
> Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
> impacted without bloom filters.
>  
> Phoenix is still used for GETs by users (SELECT queries with the key as a
> filter), and we also have constructs that intrinsically do GETs, such as
> index maintenance and "ON DUPLICATE KEY". So I believe the bloom filter
> should be "ON" by default; I don't see any downside to it, even when it is
> not being used.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-09 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6710:
---
Description: 
PHOENIX-3842 was done to work around PHOENIX-3797 in order to unblock a
release, under the assumption that Phoenix is not used for GETs.

At one of our users, we saw that they have been doing heavy GETs in their
custom coprocessor to check whether a key is present in the current region.
About 99% of the time the key is not expected to be present during the initial
load, as keys are expected to be random, but there is still a chance that
roughly 1% of keys are duplicated. In the absence of a BloomFilter, HBase has
to seek into the HFile to confirm that the key is not present, which results
in a performance regression of about 2x.

Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
impacted without bloom filters.

Phoenix is still used for GETs by users (SELECT queries with the key as a
filter), and we also have constructs that intrinsically do GETs, such as index
maintenance and "ON DUPLICATE KEY". So I believe the bloom filter should be
"ON" by default; I don't see any downside to it, even when it is not being
used.

 

  was:
It looks like PHOENIX-3842 was done to work around PHOENIX-3797 in order to
unblock a release, and it was assumed that Phoenix is not used for GETs.

At one of our users, we saw that they have been doing heavy GETs in their
custom coprocessor to check whether a key is present in the current region.
About 99% of the time the key is not expected to be present during the initial
load, as keys are expected to be random, but there is still a chance that
roughly 1% of keys are duplicated. In the absence of a BloomFilter, HBase has
to seek into the HFile to confirm that the key is not present, which results
in a performance regression of about 2x.

Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
impacted without bloom filters.

As Phoenix is still used for GETs by users, and we also have constructs that
intrinsically do GETs, like index maintenance and others, I believe it is
always better to have a bloom filter that is "ON" by default, as I don't see
any implication of keeping it ON, even if it is not getting used.

 


> Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables
> ---
>
> Key: PHOENIX-6710
> URL: https://issues.apache.org/jira/browse/PHOENIX-6710
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.11.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>
> PHOENIX-3842 was done to work around PHOENIX-3797 in order to unblock a
> release, under the assumption that Phoenix is not used for GETs.
>  
> At one of our users, we saw that they have been doing heavy GETs in their
> custom coprocessor to check whether a key is present in the current region.
> About 99% of the time the key is not expected to be present during the
> initial load, as keys are expected to be random, but there is still a chance
> that roughly 1% of keys are duplicated. In the absence of a BloomFilter,
> HBase has to seek into the HFile to confirm that the key is not present,
> which results in a performance regression of about 2x.
>  
> Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
> impacted without bloom filters.
>  
> Phoenix is still used for GETs by users (SELECT queries with the key as a
> filter), and we also have constructs that intrinsically do GETs, such as
> index maintenance and "ON DUPLICATE KEY". So I believe the bloom filter
> should be "ON" by default; I don't see any downside to it, even when it is
> not being used.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-09 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6710:
---
Description: 
It looks like PHOENIX-3842 was done to work around PHOENIX-3797 in order to
unblock a release, and it was assumed that Phoenix is not used for GETs.

At one of our users, we saw that they have been doing heavy GETs in their
custom coprocessor to check whether a key is present in the current region.
About 99% of the time the key is not expected to be present during the initial
load, as keys are expected to be random, but there is still a chance that
roughly 1% of keys are duplicated. In the absence of a BloomFilter, HBase has
to seek into the HFile to confirm that the key is not present, which results
in a performance regression of about 2x.

Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
impacted without bloom filters.

As Phoenix is still used for GETs by users, and we also have constructs that
intrinsically do GETs, like index maintenance and others, I believe it is
always better to have a bloom filter that is "ON" by default, as I don't see
any implication of keeping it ON, even if it is not getting used.

 

  was:
It looks like PHOENIX-3842 was done to work around PHOENIX-3797 in order to
unblock a release, and it was assumed that Phoenix is not used for GETs.

At one of our users, we saw that they have been doing heavy GETs in their
custom coprocessor to check whether a key is present in the current region.
About 99% of the time the key is not expected to be present during the initial
load, as keys are expected to be random, but there is still a chance that
roughly 1% of keys are duplicated. In the absence of a BloomFilter, HBase has
to seek into the HFile to confirm that the key is not present, which results
in a performance regression of about 2x.

Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
impacted without bloom filters.

As Phoenix is still used for GETs by users, and we also have constructs that
intrinsically do GETs, like index maintenance and others, I believe it is
always better to have the bloom filter "ON" by default, as I don't see any
implication of it being on, even if it is not getting used.

 


> Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables
> ---
>
> Key: PHOENIX-6710
> URL: https://issues.apache.org/jira/browse/PHOENIX-6710
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.11.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>
> It looks like PHOENIX-3842 was done to work around PHOENIX-3797 in order to
> unblock a release, and it was assumed that Phoenix is not used for GETs.
>  
> At one of our users, we saw that they have been doing heavy GETs in their
> custom coprocessor to check whether a key is present in the current region.
> About 99% of the time the key is not expected to be present during the
> initial load, as keys are expected to be random, but there is still a chance
> that roughly 1% of keys are duplicated. In the absence of a BloomFilter,
> HBase has to seek into the HFile to confirm that the key is not present,
> which results in a performance regression of about 2x.
>  
> Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
> impacted without bloom filters.
>  
> As Phoenix is still used for GETs by users, and we also have constructs that
> intrinsically do GETs, like index maintenance and others, I believe it is
> always better to have a bloom filter that is "ON" by default, as I don't see
> any implication of keeping it ON, even if it is not getting used.
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (PHOENIX-6710) Revert PHOENIX-3842 Turn on back default bloomFilter for Phoenix Tables

2022-05-09 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6710:
--

 Summary: Revert PHOENIX-3842 Turn on back default bloomFilter for 
Phoenix Tables
 Key: PHOENIX-6710
 URL: https://issues.apache.org/jira/browse/PHOENIX-6710
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.11.0
Reporter: Ankit Singhal
Assignee: Ankit Singhal


It looks like PHOENIX-3842 was done to work around PHOENIX-3797 in order to
unblock a release, and it was assumed that Phoenix is not used for GETs.

At one of our users, we saw that they have been doing heavy GETs in their
custom coprocessor to check whether a key is present in the current region.
About 99% of the time the key is not expected to be present during the initial
load, as keys are expected to be random, but there is still a chance that
roughly 1% of keys are duplicated. In the absence of a BloomFilter, HBase has
to seek into the HFile to confirm that the key is not present, which results
in a performance regression of about 2x.

Use cases like index maintenance and "ON DUPLICATE KEY" queries are also
impacted without bloom filters.

As Phoenix is still used for GETs by users, and we also have constructs that
intrinsically do GETs, like index maintenance and others, I believe it is
always better to have the bloom filter "ON" by default, as I don't see any
implication of it being on, even if it is not getting used.

 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6354) Update to spark 3.0

2022-02-08 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6354:
--

Assignee: (was: Ashwin Balasubramani)

> Update to spark 3.0
> ---
>
> Key: PHOENIX-6354
> URL: https://issues.apache.org/jira/browse/PHOENIX-6354
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Reporter: Alejandro Anadon
>Priority: Major
>
> I am trying to use the Phoenix Spark connector with Spark 3.0.
> I tried to compile from trunk, but it is still configured with 2.4.0.
> It is not as easy (I am naive) as changing the pom.xml from:
> 2.4.0
> to
> 3.0.1
>  
> I found that the class
> org.apache.phoenix.spark.datasource.v2.PhoenixDataSource is still using the
> old DataSourceV2 API, and I don't have the knowledge to update it in a
> branch.
>  
>  
>  
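> For reference, a sketch of the kind of Spark 3 read path being targeted
> (option names follow the phoenix-spark documentation for the DataSourceV2
> connector; the table name and ZooKeeper URL are placeholders):
> {code:java}
> import org.apache.spark.sql.Dataset;
> import org.apache.spark.sql.Row;
> import org.apache.spark.sql.SparkSession;
>
> public class PhoenixSparkReadSketch {
>     public static void main(String[] args) {
>         SparkSession spark = SparkSession.builder()
>                 .appName("phoenix-read")
>                 .master("local[*]")
>                 .getOrCreate();
>
>         // Read a Phoenix table as a DataFrame through the connector
>         Dataset<Row> df = spark.read()
>                 .format("phoenix")
>                 .option("table", "TEST")
>                 .option("zkUrl", "localhost:2181")
>                 .load();
>         df.show();
>     }
> }
> {code}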



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6632) Migrate connectors to Spark-3

2022-02-08 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6632:
--

Assignee: Ashwin Balasubramani

> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Ashwin Balasubramani
>Priority: Major
>
> With Spark 3, the DataSourceV2 API has undergone major changes: a new
> TableProvider interface has been introduced. These changes give the data
> source developer more control and better integration with the Spark
> optimizer. A skeleton of the new entry point is sketched below.
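> A minimal sketch of the Spark 3 TableProvider surface (class name and schema
> are illustrative; a real connector would resolve the schema from Phoenix
> metadata and return a Table that also implements SupportsRead/SupportsWrite):
> {code:java}
> import java.util.Collections;
> import java.util.Map;
> import java.util.Set;
>
> import org.apache.spark.sql.connector.catalog.Table;
> import org.apache.spark.sql.connector.catalog.TableCapability;
> import org.apache.spark.sql.connector.catalog.TableProvider;
> import org.apache.spark.sql.connector.expressions.Transform;
> import org.apache.spark.sql.types.DataTypes;
> import org.apache.spark.sql.types.StructType;
> import org.apache.spark.sql.util.CaseInsensitiveStringMap;
>
> public class PhoenixTableProviderSketch implements TableProvider {
>
>     @Override
>     public StructType inferSchema(CaseInsensitiveStringMap options) {
>         // Placeholder; a real connector would look up the Phoenix table.
>         return new StructType().add("ID", DataTypes.LongType);
>     }
>
>     @Override
>     public Table getTable(StructType schema, Transform[] partitioning,
>             Map<String, String> properties) {
>         return new Table() {
>             @Override public String name() { return properties.get("table"); }
>             @Override public StructType schema() { return schema; }
>             @Override public Set<TableCapability> capabilities() {
>                 return Collections.singleton(TableCapability.BATCH_READ);
>             }
>         };
>     }
> }
> {code}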



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Assigned] (PHOENIX-6354) Update to spark 3.0

2022-02-08 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6354:
--

Assignee: Ashwin Balasubramani

> Update to spark 3.0
> ---
>
> Key: PHOENIX-6354
> URL: https://issues.apache.org/jira/browse/PHOENIX-6354
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Reporter: Alejandro Anadon
>Assignee: Ashwin Balasubramani
>Priority: Major
>
> I am trying to use the Phoenix Spark connector with Spark 3.0.
> I tried to compile from trunk, but it is still configured with 2.4.0.
> It is not as easy (I am naive) as changing the pom.xml from:
> 2.4.0
> to
> 3.0.1
>  
> I found that the class
> org.apache.phoenix.spark.datasource.v2.PhoenixDataSource is still using the
> old DataSourceV2 API, and I don't have the knowledge to update it in a
> branch.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Resolved] (PHOENIX-6610) [Phoenix-connectors] Upgrade Log4j dependency to address CVE-2021-44228

2021-12-14 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6610.

Resolution: Duplicate

> [Phoenix-connectors] Upgrade Log4j dependency to address CVE-2021-44228 
> 
>
> Key: PHOENIX-6610
> URL: https://issues.apache.org/jira/browse/PHOENIX-6610
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6610) [Phoenix-connectors] Upgrade Log4j dependency to address CVE-2021-44228

2021-12-13 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6610:
--

 Summary: [Phoenix-connectors] Upgrade Log4j dependency to address 
CVE-2021-44228 
 Key: PHOENIX-6610
 URL: https://issues.apache.org/jira/browse/PHOENIX-6610
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal






--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-5566) Remove useless LocalTableState.trackedColumns

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5566:
---
Fix Version/s: (was: 5.1.2)

> Remove useless LocalTableState.trackedColumns 
> --
>
> Key: PHOENIX-5566
> URL: https://issues.apache.org/jira/browse/PHOENIX-5566
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5566_v1-4.x-HBase-1.4.patch
>
>
> I found that LocalTableState.trackedColumns is now unused, so we can remove
> it and make the index-building code more readable. IMHO, the index-building
> code is hard to understand.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4830:
---
Fix Version/s: (was: 5.1.2)

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch, 
> PHOENIX-4830-4.x-HBase-1.3.007.patch, PHOENIX-4830-4.x-HBase-1.3.008.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> There may be an error in the ScanUtil.setupReverseScan code.
> Then:
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> Executing this SQL:
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> throws the following exception:
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
> at 
> 

[jira] [Updated] (PHOENIX-4861) While adding a view column make a single RPC to update the encoded column qualifier counter and remove the table from the cache of the physical table

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4861:
---
Fix Version/s: (was: 5.1.2)

> While adding a view column make a single RPC to update the encoded column 
> qualifier counter and remove the table from the cache of the physical table 
> --
>
> Key: PHOENIX-4861
> URL: https://issues.apache.org/jira/browse/PHOENIX-4861
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> For tables that use column encoding when we add a column to a view we need to 
> update the encoded column qualifier counter on the base table. Currently we 
> do this in two rpcs:
> {code}
> // there should only be remote mutations if we are creating a view that uses
> // encoded column qualifiers (the remote mutations are to update the encoded
> // column qualifier counter on the parent table)
> if (parentTable != null && tableType == PTableType.VIEW && parentTable
>         .getEncodingScheme() != QualifierEncodingScheme.NON_ENCODED_QUALIFIERS) {
>     response = processRemoteRegionMutations(
>             PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
>             remoteMutations,
>             MetaDataProtos.MutationCode.UNABLE_TO_UPDATE_PARENT_TABLE);
>     clearParentTableFromCache(clientTimeStamp,
>             parentTable.getSchemaName() != null
>                     ? parentTable.getSchemaName().getBytes()
>                     : ByteUtil.EMPTY_BYTE_ARRAY,
>             parentTable.getName().getBytes());
>     if (response != null) {
>         done.run(response);
>         return;
>     }
> }
> {code}
> Move this code to MetaDataClient.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4266) Avoid scanner caching in Phoenix

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4266:
---
Fix Version/s: (was: 5.1.2)

> Avoid scanner caching in Phoenix
> 
>
> Key: PHOENIX-4266
> URL: https://issues.apache.org/jira/browse/PHOENIX-4266
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Phoenix tries to set caching on all scans. On HBase versions before 0.98 that
> made sense; now it is the wrong thing to do.
> HBase will by default do size-based chunking. Setting scanner caching
> prevents HBase from doing this work.
> We should avoid scanner caching everywhere, and only use it in cases where we
> know the number of rows to be returned (and that number is small); see the
> sketch below.
> [~sergey.soldatov], [~jamestaylor]
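> A sketch of the contrast (HBase client API; the numbers are arbitrary):
> {code:java}
> import org.apache.hadoop.hbase.client.Scan;
>
> public class ScannerCachingSketch {
>     public static void main(String[] args) {
>         // Legacy pattern: fixed row-count caching, which overrides the
>         // server's size-based chunking on modern HBase.
>         Scan legacy = new Scan();
>         legacy.setCaching(1000);
>
>         // Preferred: leave caching unset so HBase chunks by size; optionally
>         // bound the per-RPC result size instead.
>         Scan chunked = new Scan();
>         chunked.setMaxResultSize(2L * 1024 * 1024); // 2 MB per RPC
>     }
> }
> {code}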



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5020) PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5020:
---
Fix Version/s: (was: 5.1.2)

> PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs
> ---
>
> Key: PHOENIX-5020
> URL: https://issues.apache.org/jira/browse/PHOENIX-5020
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
>  Labels: SFDC
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> If an environment has a huge SYSTEM.CATALOG (such as one with many views), the
> query in getCandidateJobs can time out. Because of PHOENIX-4936, this makes it
> look like there are no indexes that need an async rebuild. In addition to
> fixing PHOENIX-4936, we should extend the timeout, as sketched below.
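> A minimal sketch of the client-side knob ("phoenix.query.timeoutMs" is
> Phoenix's standard query timeout property; the URL and value here are
> placeholders):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.util.Properties;
>
> public class LongTimeoutConnection {
>     public static void main(String[] args) throws Exception {
>         Properties props = new Properties();
>         // Allow the candidate-job query over a huge SYSTEM.CATALOG to run
>         // for up to 10 minutes instead of the default timeout.
>         props.setProperty("phoenix.query.timeoutMs",
>                 String.valueOf(10 * 60 * 1000));
>         try (Connection conn = DriverManager.getConnection(
>                 "jdbc:phoenix:localhost", props)) {
>             // the getCandidateJobs-style query would run on this connection
>         }
>     }
> }
> {code}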



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4216) Figure out why tests randomly fail with master not able to initialize in 200 seconds

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4216:
---
Fix Version/s: (was: 5.1.2)

> Figure out why tests randomly fail with master not able to initialize in 200 
> seconds
> 
>
> Key: PHOENIX-4216
> URL: https://issues.apache.org/jira/browse/PHOENIX-4216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Samarth Jain
>Priority: Major
>  Labels: phoenix-hardening, precommit, quality-improvement
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: Precommit-3849.log
>
>
> Sample failure:
>  [https://builds.apache.org/job/PreCommit-PHOENIX-Build/1450//testReport/]
> [~apurtell] - Looking at the thread dump in the above link, do you see why 
> master startup failed? I couldn't see any obvious deadlocks.
>  
> Exception stacktrace:
> org.apache.hadoop.hbase.regionserver.HRegionServer(2414): Master rejected
> startup because clock is out of sync
> org.apache.hadoop.hbase.regionserver.HRegionServer(2414): Master rejected
> startup because clock is out of sync
> org.apache.hadoop.hbase.ClockOutOfSyncException:
> org.apache.hadoop.hbase.ClockOutOfSyncException: Server 
> 2a3b1691db3a,42899,1590685404919 has been rejected; Reported time is too far 
> out of sync with master.  Time difference of 1590685396313ms > max allowed of 
> 3ms at 
> org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:411)
>  at 
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:277)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:368)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2417) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:186) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:166)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:95)
>  at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:85)
>  at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:372)
>  at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:331)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2412)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:960)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:360) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1744)
>  at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:334) 
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:139)
>  at java.lang.Thread.run(Thread.java:748)Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ClockOutOfSyncException):
>  org.apache.hadoop.hbase.ClockOutOfSyncException: Server 
> 2a3b1691db3a,42899,1590685404919 has been rejected; Reported time is too far 
> out of sync with master.  Time difference of 1590685396313ms > max allowed of 
> 3ms at 
> org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:411)
>  at 
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:277)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:368)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
>  at 

[jira] [Updated] (PHOENIX-5498) When dropping a view, send delete mutations for parent->child links from client to server rather than doing server-server RPCs

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5498:
---
Fix Version/s: (was: 5.1.2)

> When dropping a view, send delete mutations for parent->child links from 
> client to server rather than doing server-server RPCs
> --
>
> Key: PHOENIX-5498
> URL: https://issues.apache.org/jira/browse/PHOENIX-5498
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Once we are able to generate delete mutations using the child view and parent 
> PTable, we should send the mutations directly from the client to the endpoint 
> coprocessor on SYSTEM.CHILD_LINK rather than doing a server-server RPC from 
> the SYSTEM.CATALOG region to the SYSTEM.CHILD_LINK region.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4868) Create a column attribute IS_EXCLUDED to denote a dropped derived column and remove LinkType.EXCLUDED_COLUMN

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4868:
---
Fix Version/s: (was: 5.1.2)

> Create a column attribute IS_EXCLUDED to denote a dropped derived column and 
> remove LinkType.EXCLUDED_COLUMN
> 
>
> Key: PHOENIX-4868
> URL: https://issues.apache.org/jira/browse/PHOENIX-4868
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5170:
---
Fix Version/s: (was: 5.1.2)

> Update meta timestamp of parent table when dropping index
> -
>
> Key: PHOENIX-5170
> URL: https://issues.apache.org/jira/browse/PHOENIX-5170
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: gabry
>Priority: Major
>  Labels: phoenix
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: updateParentTableMetaWhenDroppingIndex.patch
>
>
> I have a Flume client which inserts values into a Phoenix table with an index
> named idx_abc.
> When idx_abc is dropped, Flume logs WARN messages forever, as follows:
> 28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
> (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
>   - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed.  disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
> at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed.  disableIndexOnFailure=true, Failed to write to multiple index 
> tables: [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
> ... 21 more
> Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
> disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
> ... 20 more
>  on 

[jira] [Updated] (PHOENIX-5362) Mappers should use the queryPlan from the driver rather than regenerating the plan

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5362:
---
Fix Version/s: (was: 5.1.2)

> Mappers should use the queryPlan from the driver rather than regenerating the 
> plan
> --
>
> Key: PHOENIX-5362
> URL: https://issues.apache.org/jira/browse/PHOENIX-5362
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Currently, PhoenixInputFormat#getQueryPlan already generates a queryPlan and 
> we use this plan to get the scans and splits for the MR job. In 
> PhoenixInputFormat#createRecordReader which is called inside each mapper, we 
> again create a queryPlan and pass this to the PhoenixRecordReader instance.
> There are multiple problems with this approach:
> # The mappers already have information about the scans from the driver code. 
> We potentially just need to wrap these scans in an iterator and create a 
> subsequent ResultSet.
> # The mappers don't need most of the information embedded within a queryPlan, 
> so they shouldn't need to regenerate the plan.
> # There are weird corner cases that can occur if we replan the query in each 
> mapper. For ex: If there is an index creation or metadata change in between 
> when the MR job was created, and when the mappers actually launch. In this 
> case, the mappers have the scans created for the first queryPlan, but the 
> mappers will use iterators created for the second queryPlan. In such cases, 
> the issued scans would not match the queryPlan embedded in the mappers' 
> iterators/ResultSet. We could potentially miss some scans, or look for more
> than we actually require, since we check the original scans for this size. The
> resolved table would be as per the new queryPlan, and there could be a 
> mismatch here as well (considering the index creation case). There are 
> potentially other repercussions in case of intermediary metadata changes as 
> well.
> Serializing a subset of the information (like the projector, which iterator 
> to use, etc.) of a QueryPlan and passing it from the driver to the mappers 
> without having them regenerate the plans seems like the best way forward.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4195) PHOENIX-4195 Deleting view rows with extended PKs through the base table silently fails

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4195:
---
Fix Version/s: (was: 5.1.2)

> PHOENIX-4195 Deleting view rows with extended PKs through the base table 
> silently fails
> ---
>
> Key: PHOENIX-4195
> URL: https://issues.apache.org/jira/browse/PHOENIX-4195
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: test.diff
>
>
> The attached test fails.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-3817) VerifyReplication using SQL

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3817:
---
Fix Version/s: (was: 5.1.2)

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Akshita Malhotra
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-3817-final.patch, PHOENIX-3817-final2.patch, 
> PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch, PHOENIX-3817.v3.patch, 
> PHOENIX-3817.v4.patch, PHOENIX-3817.v5.patch, PHOENIX-3817.v6.patch, 
> PHOENIX-3817.v7.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).
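> A rough sketch of the core comparison loop (connection URLs, the query, and
> the positional row comparison are all illustrative simplifications):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> public class VerifyReplicationSketch {
>     public static void main(String[] args) throws Exception {
>         String sql = "SELECT K, V FROM MY_TABLE ORDER BY K";
>         try (Connection src = DriverManager.getConnection("jdbc:phoenix:source-zk");
>              Connection dst = DriverManager.getConnection("jdbc:phoenix:target-zk");
>              Statement ss = src.createStatement();
>              Statement ds = dst.createStatement();
>              ResultSet a = ss.executeQuery(sql);
>              ResultSet b = ds.executeQuery(sql)) {
>             long good = 0, bad = 0;
>             boolean hasA = a.next(), hasB = b.next();
>             while (hasA && hasB) {
>                 boolean match = a.getString("K").equals(b.getString("K"))
>                         && a.getString("V").equals(b.getString("V"));
>                 if (match) good++; else bad++;
>                 hasA = a.next();
>                 hasB = b.next();
>             }
>             // rows present on only one side count as mismatches
>             while (hasA) { bad++; hasA = a.next(); }
>             while (hasB) { bad++; hasB = b.next(); }
>             System.out.printf("GOODROWS=%d BADROWS=%d%n", good, bad);
>         }
>     }
> }
> {code}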



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5129) Evaluate using same cell as the data cell for storing dynamic column metadata

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5129:
---
Fix Version/s: (was: 5.1.2)

> Evaluate using same cell as the data cell for storing dynamic column metadata
> -
>
> Key: PHOENIX-5129
> URL: https://issues.apache.org/jira/browse/PHOENIX-5129
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> In PHOENIX-374 we use shadow cells to store metadata for dynamic columns in 
> order to be able to project these columns for wildcard queries. More details 
> outlined in the [design 
> doc|https://docs.google.com/document/d/1-N6Z6Id0LzJ457BHT542cxqdKfeZgkFvKGW4xKDPtqs/edit].
> This Jira is to discuss changing the approach so that we can store the 
> metadata in the same cell as the dynamic column data, instead of separate 
> shadow cells. This will help reduce the size of store files since we don't 
> have to store additional rows corresponding to the shadow cell.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4910) Improvements to spooled MappedByteBufferQueue files

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4910:
---
Fix Version/s: (was: 5.1.2)

> Improvements to spooled MappedByteBufferQueue files
> ---
>
> Key: PHOENIX-4910
> URL: https://issues.apache.org/jira/browse/PHOENIX-4910
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-4910.001.patch, PHOENIX-4910.002.patch
>
>
> A user ran into a JVM bug which appears to have caused a RegionServer to 
> crash while running a topN aggregate query. This left a large number of files 
> in {{/tmp}} after the RS had gone away (due to a JVM SIGBUS crash). 
> MappedByteBufferQueue will buffer results in memory up to 20MB by default 
> (controlled by {{phoenix.query.spoolThresholdBytes}}) and then start 
> appending them to a file. I'm seeing two things which could be improved:
>  * If the RS exits abnormally, there is no process to clean up files - it
> would be nice to register the {{deleteOnExit()}} hook to try to clean these
> up (see the sketch below).
>  * There is no ability to control where MappedByteBufferQueue writes its
> spool file - it would be nice to use something other than /tmp (I think we
> already have a property in our config to control this).
> FYI [~an...@apache.org]
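> A minimal sketch of the first item, applied at the spool-file creation site
> (the directory property name here is illustrative, not an existing Phoenix
> config):
> {code:java}
> import java.io.File;
> import java.io.IOException;
>
> public class SpoolFileCleanupSketch {
>     static File createSpoolFile() throws IOException {
>         File spoolDir = new File(System.getProperty("spool.dir.example", "/tmp"));
>         File spoolFile = File.createTempFile("ResultSpooler", ".bin", spoolDir);
>         // Best effort only: this helps on normal JVM exit, but not on an
>         // abnormal crash such as the SIGBUS described above.
>         spoolFile.deleteOnExit();
>         return spoolFile;
>     }
> }
> {code}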



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4846) WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort order of pk columns being filtered on changes

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4846:
---
Fix Version/s: (was: 5.1.2)

> WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort 
> order of pk columns being filtered on changes
> ---
>
> Key: PHOENIX-4846
> URL: https://issues.apache.org/jira/browse/PHOENIX-4846
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Thomas D'Silva
>Priority: Critical
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-4846-wip.patch
>
>
> {{ExpressionComparabilityWrapper}} should set the sort order based on 
> {{childPart.getColumn()}} or else the attached test throws an 
> IllegalArgumentException
> {code}
> java.lang.IllegalArgumentException: 4 > 3
> at java.util.Arrays.copyOfRange(Arrays.java:3519)
> at 
> org.apache.hadoop.hbase.io.ImmutableBytesWritable.copyBytes(ImmutableBytesWritable.java:272)
> at 
> org.apache.phoenix.compile.WhereOptimizer.getTrailingRange(WhereOptimizer.java:329)
> at 
> org.apache.phoenix.compile.WhereOptimizer.clipRight(WhereOptimizer.java:350)
> at 
> org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:237)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:157)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:556)
> {code}
> Also in {{pushKeyExpressionsToScan()}} we cannot extract pk column nodes from 
> the where clause if the sort order of the columns changes. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5066:
---
Fix Version/s: (was: 5.1.2)

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai Zhang
>Priority: Critical
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: DateTest.java, PHOENIX-5066.4x.v1.patch, 
> PHOENIX-5066.4x.v2.patch, PHOENIX-5066.4x.v3.patch, 
> PHOENIX-5066.master.v1.patch, PHOENIX-5066.master.v2.patch, 
> PHOENIX-5066.master.v3.patch, PHOENIX-5066.master.v4.patch, 
> PHOENIX-5066.master.v5.patch, PHOENIX-5066.master.v6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have two ways to write data when using the JDBC API:
> #1. Use the _executeUpdate_ method to execute a string that is an UPSERT SQL
> statement.
> #2. Use the _prepareStatement_ method to set some objects and execute.
> The _string_ data needs to be converted to a new object using the schema
> information of the table. We use date formatters to convert string data to
> objects for Date/Time/Timestamp types when writing data, and the same
> formatters are used when reading data as well.
>  
> *Uses default timezone test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table by the getString methods 
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
>  *Uses GMT+8 test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table by the getString methods
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string into
> Date/Time/Timestamp objects with the time zone applied, which means the
> actual data is changed when it is stored in the HBase table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-3165) System table integrity check and repair tool

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-3165:
---
Fix Version/s: (was: 5.1.2)

> System table integrity check and repair tool
> 
>
> Key: PHOENIX-3165
> URL: https://issues.apache.org/jira/browse/PHOENIX-3165
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Kyle Purtell
>Assignee: Xinyi Yan
>Priority: Critical
>  Labels: phoenix-hardening
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the Phoenix system tables become corrupt, recovery is a painstaking
> process of low-level examination of table contents and manipulation of same
> with the HBase shell. This is very difficult work providing no margin of 
> safety, and is a critical gap in terms of usability.
> At the OS level, we have fsck.
> At the HDFS level, we have fsck (integrity checking only, though)
> At the HBase level, we have hbck. 
> At the Phoenix level, we lack a system table repair tool. 
> Implement a tool that:
> - Does not depend on the Phoenix client.
> - Supports integrity checking of SYSTEM tables. Check for the existence of 
> all required columns in entries. Check that entries exist for all Phoenix 
> managed tables (implies Phoenix should add supporting advisory-only metadata 
> to the HBase table schemas). Check that serializations are valid (a sketch of 
> such a check follows this list). 
> - Supports complete repair of SYSTEM.CATALOG and recreation, if necessary, of 
> other tables like SYSTEM.STATS which can be dropped to recover from an 
> emergency. We should be able to drop SYSTEM.CATALOG (or any other SYSTEM 
> table), run the tool, and have a completely correct recreation of 
> SYSTEM.CATALOG available at the end of its execution.
> - To the extent we have or introduce cross-system-table invariants, check 
> them and offer a repair or reconstruction option.
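> A rough sketch of the integrity-check side using only the raw HBase client 
> API, per the no-Phoenix-client requirement (the family name and the 
> TABLE_TYPE check are illustrative assumptions, not the actual tool):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.*;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class SyscatChecker {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         try (Connection conn = ConnectionFactory.createConnection(conf);
>              Table syscat = conn.getTable(TableName.valueOf("SYSTEM.CATALOG"))) {
>             // "0" is the default column family of SYSTEM.CATALOG
>             Scan scan = new Scan().addFamily(Bytes.toBytes("0"));
>             try (ResultScanner rs = syscat.getScanner(scan)) {
>                 for (Result row : rs) {
>                     // Illustrative check only: report rows missing a column
>                     if (row.getValue(Bytes.toBytes("0"),
>                             Bytes.toBytes("TABLE_TYPE")) == null) {
>                         System.out.println("suspect row: "
>                             + Bytes.toStringBinary(row.getRow()));
>                     }
>                 }
>             }
>         }
>     }
> }
> {code}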



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5574) Disallow creating index when index.region.observer.enabled flag is false and base table is loaded with IRO coproc

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5574:
---
Fix Version/s: (was: 5.1.2)

> Disallow creating index when index.region.observer.enabled flag is false and 
> base table is loaded with IRO coproc
> -
>
> Key: PHOENIX-5574
> URL: https://issues.apache.org/jira/browse/PHOENIX-5574
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Disallow creating index when index.region.observer.enabled flag is false and 
> base table is loaded with IRO coproc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5587) Update documentation for secondary indexes

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5587:
---
Fix Version/s: (was: 5.1.2)

> Update documentation for secondary indexes
> --
>
> Key: PHOENIX-5587
> URL: https://issues.apache.org/jira/browse/PHOENIX-5587
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0, 4.14.3
>Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Phoenix 4.14.3 and 4.15 (and the forthcoming 5.1) have a major revamp of the 
> secondary index framework, which requires manual upgrade steps on the part of 
> operators. These need to be documented in the Phoenix website docs. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5274) ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for existence checks

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5274:
---
Fix Version/s: (was: 5.1.2)

> ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated 
> should use HBase APIs that do not require ADMIN permissions for existence 
> checks
> -
>
> Key: PHOENIX-5274
> URL: https://issues.apache.org/jira/browse/PHOENIX-5274
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.2
>Reporter: Chinmay Kulkarni
>Assignee: Ankit Jain
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5274.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5274.4.x-HBase-1.5.v2.patch, PHOENIX-5274.4.x-HBase-1.5.v3.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [HBASE-22377|https://issues.apache.org/jira/browse/HBASE-22377] will 
> introduce a new API that does not require ADMIN permissions to check the 
> existence of a namespace.
> Currently, CQSI#ensureNamespaceCreated calls 
> HBaseAdmin#getNamespaceDescriptor which eventually on the server causes a 
> call to AccessController#preGetNamespaceDescriptor. This tries to acquire 
> ADMIN permissions on the namespace. We should ideally use the new API 
> provided by HBASE-22377 which does not require the phoenix client to get 
> ADMIN permissions on the namespace. We should acquire ADMIN permissions only 
> in case we need to create the namespace if it doesn't already exist.
> Similarly, CQSI#ensureTableCreated should first check the existence of a 
> table before trying to do HBaseAdmin#getTableDescriptor since this requires 
> CREATE and ADMIN permissions.
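> A sketch of the intended call pattern (listNamespaces() is the HBASE-22377 
> addition; treat its availability as version-dependent):
> {code:java}
> import java.util.Arrays;
> import org.apache.hadoop.hbase.NamespaceDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> 
> public class ExistenceChecks {
>     // Namespace: list names (no ADMIN required per HBASE-22377) and only
>     // create, which does need ADMIN, when the namespace is absent
>     static void ensureNamespace(Admin admin, String ns) throws Exception {
>         if (!Arrays.asList(admin.listNamespaces()).contains(ns)) {
>             admin.createNamespace(NamespaceDescriptor.create(ns).build());
>         }
>     }
> 
>     // Table: tableExists() needs no ADMIN, unlike fetching the descriptor
>     static boolean tableExists(Admin admin, String table) throws Exception {
>         return admin.tableExists(TableName.valueOf(table));
>     }
> }
> {code}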



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5518) Unittests for global index read repair count

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5518:
---
Fix Version/s: (was: 5.1.2)

> Unittests for global index read repair count
> 
>
> Key: PHOENIX-5518
> URL: https://issues.apache.org/jira/browse/PHOENIX-5518
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> [~kadir] and I were tracking down a scenario where the read repair kept 
> increasing.
> It turned out not to be a bug, but we realized that there is no test that 
> checks whether the read repair count is as expected, since correctness is 
> guaranteed in any case.
> So let's add a test case based on the read repairs metric we added some time 
> back.
> I will not have time to work on it; just filing in case somebody does.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5339) Refactor IndexBuildManager and IndexBuilder to eliminate usage of Pair

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5339:
---
Fix Version/s: (was: 5.1.2)

> Refactor IndexBuildManager and IndexBuilder to eliminate usage of Pair
> --
>
> Key: PHOENIX-5339
> URL: https://issues.apache.org/jira/browse/PHOENIX-5339
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Some methods of the IndexBuildManager and IndexBuilder classes return a 
> collection of pairs or triplets (implemented as pairs of pair and single). 
> This makes the Indexer and especially IndexRegionObserver code difficult to 
> read. We can replace the pair structures with a class, say IndexUpdate, to 
> make the code more readable.
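> A hedged sketch of the kind of value class the refactor suggests (the field 
> set is a guess at what the nested pairs currently encode):
> {code:java}
> import org.apache.hadoop.hbase.client.Mutation;
> 
> // Named fields instead of pair.getFirst()/pair.getSecond(), so call
> // sites in IndexRegionObserver read as update.getIndexTableName()
> public final class IndexUpdate {
>     private final Mutation mutation;
>     private final byte[] indexTableName;
> 
>     public IndexUpdate(Mutation mutation, byte[] indexTableName) {
>         this.mutation = mutation;
>         this.indexTableName = indexTableName;
>     }
> 
>     public Mutation getMutation() { return mutation; }
>     public byte[] getIndexTableName() { return indexTableName; }
> }
> {code}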



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5258:
---
Fix Version/s: (was: 5.1.2)

> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> 
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Assignee: Prashant Vithani
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, 
> PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, 
> PHOENIX-5258-master.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading a header from the input 
> csv and expects the content of the csv to match the table schema. Header 
> support can be added to dynamically map the schema to the header.
> The proposed solution is to introduce another option for the tool 
> `–parse-header`. If this option is passed, the input columns list is 
> constructed by reading the first line of the input CSV file.
>  * If there is only one file, read the header from the first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw 
> an error if the headers across files do not match (sketched below).
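> A minimal sketch of the header-consistency rule above (local files for 
> brevity; the real tool would read HDFS paths):
> {code:java}
> import java.io.BufferedReader;
> import java.io.IOException;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.util.Arrays;
> import java.util.List;
> 
> public class CsvHeaderParser {
>     // Read the first line of each input; fail fast if the headers disagree
>     static String[] resolveHeader(List<Path> inputs) throws IOException {
>         String[] header = null;
>         for (Path p : inputs) {
>             try (BufferedReader r = Files.newBufferedReader(p)) {
>                 String[] current = r.readLine().split(",");
>                 if (header == null) {
>                     header = current;
>                 } else if (!Arrays.equals(header, current)) {
>                     throw new IllegalStateException("Header mismatch in " + p);
>                 }
>             }
>         }
>         return header; // feeds ColumnInfo list construction
>     }
> }
> {code}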



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5648) Improve IndexScrutinyTool's performance by moving comparison logic to server side

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5648:
---
Fix Version/s: (was: 5.1.2)

> Improve IndexScrutinyTool's performance by moving comparison logic to server 
> side
> -
>
> Key: PHOENIX-5648
> URL: https://issues.apache.org/jira/browse/PHOENIX-5648
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> If IndexScrutinyTool runs on a table with a billion rows, it takes a lot of 
> time. 
> One of the ways to improve the tool is to move the comparison to the 
> server-side. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5117) Return the count of rows scanned in HBase

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5117:
---
Fix Version/s: (was: 5.1.2)

> Return the count of rows scanned in HBase
> -
>
> Key: PHOENIX-5117
> URL: https://issues.apache.org/jira/browse/PHOENIX-5117
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.14.1
>Reporter: Chen Feng
>Assignee: Chen Feng
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5117-4.x-HBase-1.4-v1.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v2.patch, PHOENIX-5117-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v4.patch, PHOENIX-5117-4.x-HBase-1.4-v5.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v6.patch, PHOENIX-5117-v1.patch
>
>
> HBASE-5980 provides the ability to return the number of rows scanned. Such 
> metrics should also be returned by Phoenix.
> HBASE-21815 is required.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5497) When dropping a view, use the PTable for generating delete mutations for links rather than scanning SYSTEM.CATALOG

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5497:
---
Fix Version/s: (was: 5.1.2)

> When dropping a view, use the PTable for generating delete mutations for 
> links rather than scanning SYSTEM.CATALOG
> --
>
> Key: PHOENIX-5497
> URL: https://issues.apache.org/jira/browse/PHOENIX-5497
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> When dropping a view, we should generate the delete markers for the 
> parent->child links using the view and parent's PTable rather than by issuing 
> a scan on SYSTEM.CATALOG (see 
> [this|https://github.com/apache/phoenix/blob/207ab526ee511a19ac287f61fbd2cef268c5038d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2310]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5400) Table name while selecting index state is case sensitive

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5400:
---
Fix Version/s: (was: 5.1.2)

> Table name while selecting index state is case sensitive
> 
>
> Key: PHOENIX-5400
> URL: https://issues.apache.org/jira/browse/PHOENIX-5400
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.2
>Reporter: Ashutosh Parekh
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Initially, the following query is executed:
>  
> {code:java}
> CREATE TABLE IF NOT EXISTS us_population (
>  state CHAR(2) NOT NULL,
>  city VARCHAR NOT NULL,
>  population BIGINT,
>  CONSTRAINT my_pk PRIMARY KEY (state, city)) COLUMN_ENCODED_BYTES=0;
> UPSERT INTO us_population VALUES('NY','New York',8143197);
> UPSERT INTO us_population VALUES('CA','Los Angeles',3844829);
> UPSERT INTO us_population VALUES('IL','Chicago',2842518);
> UPSERT INTO us_population VALUES('TX','Houston',2016582);
> UPSERT INTO us_population VALUES('PA','Philadelphia',1463281);
> UPSERT INTO us_population VALUES('AZ','Phoenix',1461575);
> UPSERT INTO us_population VALUES('TX','San Antonio',1256509);
> UPSERT INTO us_population VALUES('CA','San Diego',1255540);
> UPSERT INTO us_population VALUES('TX','Dallas',1213825);
> UPSERT INTO us_population VALUES('CA','San Jose',912332);
> CREATE VIEW us_population_global_view (name VARCHAR,
>  age BIGINT) AS
> SELECT * FROM us_population
> WHERE state = 'CA';
> CREATE INDEX us_population_gv_gi_1 ON us_population_global_view(age) include 
> (city) async;
> {code}
>  
> Then,
> {code:java}
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter{code}
> is run.
> After that, The following queries then lead to a different output:
> {code:java}
> SELECT INDEX_STATE FROM SYSTEM.CATALOG WHERE 
> TABLE_NAME='us_population_gv_gi_1';{code}
> Output:
> {code:java}
> +--+
> | INDEX_STATE |
> +--+
> +--+
> No rows selected (0.076 seconds){code}
> and
> {code:java}
> SELECT INDEX_STATE FROM SYSTEM.CATALOG WHERE 
> TABLE_NAME='US_POPULATION_GV_GI_1';{code}
> Output:
> {code:java}
> +--+
> | INDEX_STATE |
> +--+
> | b |
> | |
> | |
> | |
> | |
> +--+
> 5 rows selected (0.063 seconds){code}
> The only difference between the above queries is the case in which the table 
> name is written.
> This needs an appropriate resolution.
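> The likely explanation is that Phoenix upper-cases unquoted identifiers at 
> DDL time, so the index is stored in SYSTEM.CATALOG as US_POPULATION_GV_GI_1, 
> while a string filter on TABLE_NAME stays case-sensitive. A sketch of a 
> lookup that works either way (the JDBC URL is a placeholder):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
> 
> public class CatalogLookup {
>     public static void main(String[] args) throws Exception {
>         try (Connection conn =
>                  DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement();
>              // Normalize the filter value the same way DDL normalized the name
>              ResultSet rs = stmt.executeQuery(
>                  "SELECT INDEX_STATE FROM SYSTEM.CATALOG "
>                  + "WHERE TABLE_NAME = UPPER('us_population_gv_gi_1')")) {
>             while (rs.next()) {
>                 System.out.println(rs.getString(1));
>             }
>         }
>     }
> }
> {code}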



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5283) Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5283:
---
Fix Version/s: (was: 5.1.2)

> Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD 
> 
>
> Key: PHOENIX-5283
> URL: https://issues.apache.org/jira/browse/PHOENIX-5283
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5283.4.x-hbase-1.3.v1.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Include the following support in the grammar: 
> ALTER TABLE ADD CASCADE <(comma separated list of indexes) | ALL > IF NOT 
> EXISTS  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5140) TableNotFoundException occurs when we create local asynchronous index

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5140:
---
Fix Version/s: (was: 5.1.2)

> TableNotFoundException occurs when we create local asynchronous index
> -
>
> Key: PHOENIX-5140
> URL: https://issues.apache.org/jira/browse/PHOENIX-5140
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: > HDP : 3.0.0.0, HBase : 2.0.0,phoenix : 5.0.0 and 
> hadoop : 3.1.0
>Reporter: MariaCarrie
>Assignee: dan zheng
>Priority: Major
>  Labels: IndexTool, localIndex, tableUndefined
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5140-master-v1.patch, 
> PHOENIX-5140-master-v2.patch
>
>   Original Estimate: 48h
>  Time Spent: 20m
>  Remaining Estimate: 47h 40m
>
> First I create the table and insert the data:
> ^create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
> varchar,age varchar);^
> ^upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');^
> The asynchronous index is then created:
> ^create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
> (name) ASYNC;^
> Because Kerberos is enabled, I need to kinit with the HBase principal first, 
> then execute the following command:
> ^HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
> /usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
> DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path 
> /hbase-backup2^
> But I got the following error:
> ^Error: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=DMP.DMP_INDEX_TEST2^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)^
> ^at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)^
> ^at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)^
> ^at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)^
> ^at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)^
> ^at java.security.AccessController.doPrivileged(Native Method)^
> ^at javax.security.auth.Subject.doAs(Subject.java:422)^
> ^at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)^
> ^at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)^
> ^Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2^
> ^at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)^
> ^at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)^
> ^at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)^
> ^... 9 more^
> I can query this table and have access to it; it works well:
> ^select * from DMP.DMP_INDEX_TEST2;^
> ^select * from DMP.TMP_INDEX_DMP_TEST2;^
> ^drop table DMP.DMP_INDEX_TEST2;^
> But why did my MR task hit this error? Any suggestions?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5495) Avoid server-server RPCs when row locks are held inside MetaDataEndpointImpl

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5495:
---
Fix Version/s: (was: 5.1.2)

> Avoid server-server RPCs when row locks are held inside MetaDataEndpointImpl
> 
>
> Key: PHOENIX-5495
> URL: https://issues.apache.org/jira/browse/PHOENIX-5495
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> At various spots in MetaDataEndpointImpl, we acquire row locks and then, 
> while the locks are held, make server-to-server RPC calls. This can lead to 
> lock starvation if the RPCs take too long. We should decouple such 
> interactions as much as possible.
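> A hedged sketch of the decoupling pattern being asked for (names are 
> illustrative stubs, not the actual MetaDataEndpointImpl code):
> {code:java}
> public class LockDecouplingSketch {
>     interface RowLock { void release(); }
> 
>     // Anti-pattern: a server-to-server RPC issued while the row lock is
>     // held starves other waiters if the RPC is slow. Preferred shape:
>     // do the remote call first, then lock briefly and re-validate.
>     void update(byte[] row) throws Exception {
>         Object remote = remoteCall(row);      // no lock held here
>         RowLock lock = lockRow(row);
>         try {
>             if (stillValid(row, remote)) {    // cheap local re-check
>                 applyLocally(row, remote);
>             }
>         } finally {
>             lock.release();
>         }
>     }
> 
>     // Stubs standing in for the real region and RPC plumbing
>     Object remoteCall(byte[] row) { return new Object(); }
>     RowLock lockRow(byte[] row) { return () -> {}; }
>     boolean stillValid(byte[] row, Object remote) { return true; }
>     void applyLocally(byte[] row, Object remote) { }
> }
> {code}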



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5404:
---
Fix Version/s: (was: 5.1.2)

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Remove the {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable.
> While creating a table or view we need to ensure that there are not any child 
> views that haven't been cleaned up by the DropChildView task yet. Move this 
> check to the client (issue a scan against SYSTEM.CHILD_LINK to see if a single 
> linking row exists).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5534) Cursors With Request Metrics Enabled Throws Exception

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5534:
---
Fix Version/s: (was: 5.1.2)

> Cursors With Request Metrics Enabled Throws Exception
> -
>
> Key: PHOENIX-5534
> URL: https://issues.apache.org/jira/browse/PHOENIX-5534
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.1, 4.14.2, 4.14.3
>Reporter: Daniel Wong
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Metrics are set up twice along this path, which causes an exception to be 
> thrown.
> Recreation:
> Adding 
>  
> {code:java}
> props.put("phoenix.query.request.metrics.enabled","true");
>  
> {code}
> To the CursorLifecycleCompile() method in CursorCompilerTest and running the 
> test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5653) Documentation updates for Update Cache Frequency

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5653:
---
Fix Version/s: (was: 5.1.2)

> Documentation updates for Update Cache Frequency
> 
>
> Key: PHOENIX-5653
> URL: https://issues.apache.org/jira/browse/PHOENIX-5653
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Major
>  Labels: Documentation
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> The documentation for the config parameter 
> phoenix.default.update.cache.frequency is not available on the Configuration 
> page on the website. Also, the existing documentation for 'Update Cache 
> Frequency' should be updated on the Grammar page on the website. Specifically, 
> the precedence order that it follows should be mentioned, i.e.
> Table-level property > Connection-level property > Default value.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5338) Test the empty column

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5338:
---
Fix Version/s: (was: 5.1.2)

> Test the empty column
> -
>
> Key: PHOENIX-5338
> URL: https://issues.apache.org/jira/browse/PHOENIX-5338
> Project: Phoenix
>  Issue Type: Test
>Reporter: Kadir OZDEMIR
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Every Phoenix table includes a shadow column called the empty column. We 
> need an integration test to verify the following properties of the empty 
> column:
>  # Every Phoenix table (data or index) should have the empty column
>  # Every HBase mutation (full or partial row) for a Phoenix table should 
> include the empty column cell
>  # Removing/adding columns from/to a Phoenix table should not impact the 
> above empty column properties (a raw-scan sketch for property #2 follows)
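> A sketch of how property #2 could be asserted with a raw scan (the family 
> and qualifier constants are the common non-encoded defaults and are 
> assumptions here):
> {code:java}
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class EmptyColumnCheck {
>     static final byte[] FAMILY = Bytes.toBytes("0");           // default CF
>     static final byte[] EMPTY_QUALIFIER = Bytes.toBytes("_0"); // assumed
> 
>     // Every row of the table should carry the empty column cell
>     static void assertEmptyColumnPresent(Connection hbase, String table)
>             throws Exception {
>         try (Table t = hbase.getTable(TableName.valueOf(table));
>              ResultScanner rs = t.getScanner(new Scan().addFamily(FAMILY))) {
>             for (Result row : rs) {
>                 if (!row.containsColumn(FAMILY, EMPTY_QUALIFIER)) {
>                     throw new AssertionError("missing empty column: "
>                         + Bytes.toStringBinary(row.getRow()));
>                 }
>             }
>         }
>     }
> }
> {code}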



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5632) Add more information to SYSTEM.TASK TASK_DATA field apart from the task status

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5632:
---
Fix Version/s: (was: 5.1.2)

> Add more information to SYSTEM.TASK TASK_DATA field apart from the task status
> --
>
> Key: PHOENIX-5632
> URL: https://issues.apache.org/jira/browse/PHOENIX-5632
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It would be helpful for debugging if we could add more information to the 
> TASK_DATA json that is upserted into SYSTEM.TASK apart from just the task 
> status. For example, in failure cases, perhaps we can add the stack trace 
> for the failing task.
>  
> Ideas:
>  * Stacktrace in case of error
>  * Time taken for task to complete
>  * Name(s) of deleted child view(s)/table(s) per task
>  * Task_type column is represented by int; may be useful to include task type 
> in task_data column



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5649) IndexScrutinyTool is very slow on view-indexes

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5649:
---
Fix Version/s: (was: 5.1.2)

> IndexScrutinyTool is very slow on view-indexes
> --
>
> Key: PHOENIX-5649
> URL: https://issues.apache.org/jira/browse/PHOENIX-5649
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> From view-index to view, it scrutinizes about 7 rows per minute with batch 
> size 1. 
> From view to view-index, it is about 1000 rows per minute with batch size 1, 
> which is also very slow. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5803) Add unit testing for classes changed in PHOENIX-5801 and PHOENIX-5802

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5803:
---
Fix Version/s: (was: 5.1.2)

> Add unit testing for classes changed in PHOENIX-5801 and PHOENIX-5802
> -
>
> Key: PHOENIX-5803
> URL: https://issues.apache.org/jira/browse/PHOENIX-5803
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: phoenix-hardening
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> WhereConstantParser should be in the util package rather than coprocessor.
> We should also refactor, remove anonymous classes, etc. in 
> BaseResultIterators, MutatingResultIteratorFactory, UpsertCompiler, etc.
> Also need to add unit tests for all these classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5682) IndexTool can just update empty_column with verified if rest of index row matches

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5682:
---
Fix Version/s: (was: 5.1.2)

> IndexTool can just update empty_column with verified if rest of index row 
> matches
> -
>
> Key: PHOENIX-5682
> URL: https://issues.apache.org/jira/browse/PHOENIX-5682
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> When upgrading from the old indexing design to new consistent indexing, 
> IndexUpgradeTool kicks off IndexTool to rebuild the index. This index rebuild 
> rewrites all index rows. If an index row was already consistent, it is still 
> rewritten and its empty_column is updated with the verified flag.
> IndexTool could potentially just update empty_column if the rest of the index 
> row matches the data row. This would save massive writes to the underlying 
> dfs, as well as the other side effects of these writes on replication.
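> A hedged sketch of the narrower write (the constants mirror the 
> consistent-indexing conventions but are assumptions here):
> {code:java}
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class VerifiedFlagOnlyRepair {
>     static final byte[] FAMILY = Bytes.toBytes("0");           // assumed CF
>     static final byte[] EMPTY_QUALIFIER = Bytes.toBytes("_0"); // assumed
>     static final byte[] VERIFIED = Bytes.toBytes("1");         // assumed marker
> 
>     // When the rebuilt row equals the existing index row, write only the
>     // verified cell instead of re-emitting every column of the row
>     static Put verifiedOnlyPut(byte[] indexRowKey, long dataRowTimestamp) {
>         Put put = new Put(indexRowKey);
>         put.addColumn(FAMILY, EMPTY_QUALIFIER, dataRowTimestamp, VERIFIED);
>         return put;
>     }
> }
> {code}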



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5685) PDataTypeFactory Singleton is not thread safe

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5685:
---
Fix Version/s: (was: 5.1.2)

> PDataTypeFactory Singleton is not thread safe
> -
>
> Key: PHOENIX-5685
> URL: https://issues.apache.org/jira/browse/PHOENIX-5685
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> The singleton class uses lazy initialization of the INSTANCE variable; 
> however, the PDataTypeFactory#getInstance method is not synchronized, so it 
> is not thread-safe. It would be good to use double-checked locking.
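> For reference, the double-checked locking shape being suggested (a generic 
> sketch, not the actual PDataTypeFactory code):
> {code:java}
> public final class LazySingleton {
>     // volatile is essential: it prevents other threads from observing a
>     // partially constructed instance
>     private static volatile LazySingleton instance;
> 
>     private LazySingleton() { }
> 
>     public static LazySingleton getInstance() {
>         LazySingleton local = instance;        // single volatile read
>         if (local == null) {
>             synchronized (LazySingleton.class) {
>                 local = instance;
>                 if (local == null) {
>                     instance = local = new LazySingleton();
>                 }
>             }
>         }
>         return local;
>     }
> }
> {code}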



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5686) MetaDataUtil#isLocalIndex returns incorrect results

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5686:
---
Fix Version/s: (was: 5.1.2)

> MetaDataUtil#isLocalIndex returns incorrect results
> ---
>
> Key: PHOENIX-5686
> URL: https://issues.apache.org/jira/browse/PHOENIX-5686
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> isLocalIndex function in MetaDataUtil uses 
> "_LOCAL_IDX_" to check if the index is a local index. It would be good to 
> modify the method to use correct logic (get rid of the old and unused code) 
> and use the method call wherever needed. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6085) Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the upgrade path

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6085:
---
Fix Version/s: (was: 5.1.2)

> Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the 
> upgrade path
> -
>
> Key: PHOENIX-6085
> URL: https://issues.apache.org/jira/browse/PHOENIX-6085
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 4.17.0, 5.2.0
>
> Attachments: PHOENIX-6085.4.x.v1.patch, PHOENIX-6085.master.v1.patch
>
>
> We already make this call inside 
> [CQSI.acquireUpgradeMutex()|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4220]
>  and then call writeMutexCell() which calls this again 
> [here|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4244].
>  
> We should move this to inside writeMutexCell() itself and throw 
> UpgradeInProgressException if required there to avoid unnecessary expensive 
> HBase admin API calls.
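> A hedged sketch of the shape of the fix (method names follow the Jira text; 
> bodies are illustrative stubs):
> {code:java}
> public class MutexWriteSketch {
>     // Before: acquireUpgradeMutex() and writeMutexCell() each resolved the
>     // SYSTEM.MUTEX physical name, paying the HBase admin RPC cost twice.
>     // After: resolve once, inside writeMutexCell(), and fail fast there.
>     public boolean writeMutexCell(String rowKey) throws Exception {
>         byte[] mutexTable = getSysMutexPhysicalTableNameBytes(); // one lookup
>         if (mutexTable == null) {
>             // stand-in for throwing UpgradeInProgressException
>             throw new IllegalStateException("SYSTEM.MUTEX not found");
>         }
>         return checkAndPutMutexCell(mutexTable, rowKey);
>     }
> 
>     byte[] getSysMutexPhysicalTableNameBytes() { return new byte[0]; } // stub
>     boolean checkAndPutMutexCell(byte[] table, String rowKey) { return true; } // stub
> }
> {code}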



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException

2021-06-07 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5750:
---
Fix Version/s: (was: 5.1.2)

> Upsert on immutable table fails with AccessDeniedException
> --
>
> Key: PHOENIX-5750
> URL: https://issues.apache.org/jira/browse/PHOENIX-5750
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
> Attachments: PHOENIX-5750.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5750.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> // code placeholder
> In TableDDLPermissionsIT
> @Test
> public void testUpsertIntoImmutableTable() throws Throwable {
> startNewMiniCluster();
> final String schema = "TEST_INDEX_VIEW";
> final String tableName = "TABLE_DDL_PERMISSION_IT";
> final String phoenixTableName = schema + "." + tableName;
> grantSystemTableAccess();
> try {
> superUser1.runAs(new PrivilegedExceptionAction<Void>() {
> @Override
> public Void run() throws Exception {
> try {
> verifyAllowed(createSchema(schema), superUser1);
> verifyAllowed(onlyCreateTable(phoenixTableName), 
> superUser1);
> } catch (Throwable e) {
> if (e instanceof Exception) {
> throw (Exception)e;
> } else {
> throw new Exception(e);
> }
> }
> return null;
> }
> });
> if (isNamespaceMapped) {
> grantPermissions(unprivilegedUser.getShortName(), schema, 
> Action.WRITE, Action.READ,Action.EXEC);
> }
> // we should be able to read the data from another index as well to 
> which we have not given any access to
> // this user
> verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
> unprivilegedUser);
> } finally {
> revokeAll();
> }
> }
> in BasePermissionsIT:
> AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection(); Statement stmt = 
> conn.createStatement()) {
> assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
> + "(pk INTEGER not null primary key, data VARCHAR, 
> val integer)"));
> }
> return null;
> }
> };
> }
> AccessTestAction upsertRowsIntoTable(final String tableName) throws 
> SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection()) {
> try (PreparedStatement pstmt = conn.prepareStatement(
> "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
> for (int i = 0; i < NUM_RECORDS; i++) {
> pstmt.setInt(1, i);
> pstmt.setString(2, Integer.toString(i));
> pstmt.setInt(3, i);
> assertEquals(1, pstmt.executeUpdate());
> }
> }
> conn.commit();
> }
> return null;
> }
> };
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6471) Revert PHOENIX-5387 to remove unneeded CPL 1.0 license

2021-05-15 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6471.

Fix Version/s: 5.1.2
   4.16.1
   Resolution: Fixed

Thanks [~vjasani] for the review and merging it.

> Revert PHOENIX-5387 to remove unneeded CPL 1.0 license
> --
>
> Key: PHOENIX-6471
> URL: https://issues.apache.org/jira/browse/PHOENIX-6471
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.16.1, 5.1.2
>
>
> As [~jmclean] pointed out on the release vote [1], and after checking the 
> code, we don't include any source from, or bundle, the below dependency 
> requiring the CPL license in any of the artifacts we distribute, so we should 
> avoid its unnecessary mention in our LICENSE for both source and convenience 
> binaries, as per the ASF policy [2][3]
> {code:java}
> <dependency>
>     <groupId>com.github.stefanbirkner</groupId>
>     <artifactId>system-rules</artifactId>
>     <version>1.8.0</version>
> </dependency>
> {code}
> [1] 
> [https://lists.apache.org/x/thread.html/r903c814a13acf996309b9996262b14d85d219aaace22a39ea7233ef1@%3Cdev.phoenix.apache.org%3E]
> [2][https://www.apache.org/legal/resolved.html#category-b]
> [3][https://infra.apache.org/licensing-howto.html#guiding]
> FYI [~yanxinyi], [~vjasani]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6471) Revert PHOENIX-5387 to remove unneeded CPL 1.0 license

2021-05-14 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6471:
--

Assignee: Ankit Singhal

> Revert PHOENIX-5387 to remove unneeded CPL 1.0 license
> --
>
> Key: PHOENIX-6471
> URL: https://issues.apache.org/jira/browse/PHOENIX-6471
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
>
> As [~jmclean] pointed out on the release vote [1], and after checking the 
> code, we don't include any source from, or bundle, the below dependency 
> requiring the CPL license in any of the artifacts we distribute, so we should 
> avoid its unnecessary mention in our LICENSE for both source and convenience 
> binaries, as per the ASF policy [2][3]
> {code:java}
> <dependency>
>     <groupId>com.github.stefanbirkner</groupId>
>     <artifactId>system-rules</artifactId>
>     <version>1.8.0</version>
> </dependency>
> {code}
> [1] 
> [https://lists.apache.org/x/thread.html/r903c814a13acf996309b9996262b14d85d219aaace22a39ea7233ef1@%3Cdev.phoenix.apache.org%3E]
> [2][https://www.apache.org/legal/resolved.html#category-b]
> [3][https://infra.apache.org/licensing-howto.html#guiding]
> FYI [~yanxinyi], [~vjasani]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6471) Revert PHOENIX-5387 to remove unneeded CPL 1.0 license

2021-05-14 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6471:
--

 Summary: Revert PHOENIX-5387 to remove unneeded CPL 1.0 license
 Key: PHOENIX-6471
 URL: https://issues.apache.org/jira/browse/PHOENIX-6471
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.1
Reporter: Ankit Singhal


As [~jmclean] pointed out on the release vote [1], and after checking the code, 
we don't include any source from, or bundle, the below dependency requiring the 
CPL license in any of the artifacts we distribute, so we should avoid its 
unnecessary mention in our LICENSE for both source and convenience binaries, as 
per the ASF policy [2][3]
{code:java}
<dependency>
    <groupId>com.github.stefanbirkner</groupId>
    <artifactId>system-rules</artifactId>
    <version>1.8.0</version>
</dependency>
{code}
[1] 
[https://lists.apache.org/x/thread.html/r903c814a13acf996309b9996262b14d85d219aaace22a39ea7233ef1@%3Cdev.phoenix.apache.org%3E]

[2][https://www.apache.org/legal/resolved.html#category-b]

[3][https://infra.apache.org/licensing-howto.html#guiding]

FYI [~yanxinyi], [~vjasani]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5850) Set scan id for hbase side log.

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5850:
---
Fix Version/s: (was: 5.1.0)
   5.2.0

> Set scan id for hbase side log.
> ---
>
> Key: PHOENIX-5850
> URL: https://issues.apache.org/jira/browse/PHOENIX-5850
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Chen Feng
>Assignee: Chen Feng
>Priority: Minor
> Fix For: 4.14.4, 5.2.0
>
> Attachments: PHOENIX-5850-v1.patch
>
>
> Adding a scan id can help find slow queries effectively.
> It's helpful for debugging and diagnosis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5419) Cleanup anonymous class in TracingQueryPlan

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5419:
---
Fix Version/s: (was: 5.1.0)
   5.2.0

> Cleanup anonymous class in TracingQueryPlan
> ---
>
> Key: PHOENIX-5419
> URL: https://issues.apache.org/jira/browse/PHOENIX-5419
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Jain
>Assignee: Ankit Jain
>Priority: Minor
> Fix For: 5.2.0
>
> Attachments: PHOENIX-5419-4.x-HBase-1.3.patch, PHOENIX-5419.patch, 
> PHOENIX-5419.v2.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Cleanup anonymous class in TracingQueryPlan



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6089) Additional relocations for the 5.1.0 client

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6089:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Additional relocations for the 5.1.0 client
> ---
>
> Key: PHOENIX-6089
> URL: https://issues.apache.org/jira/browse/PHOENIX-6089
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 5.1.1
>
> Attachments: 6089-master.txt, 6089.txt
>
>
> I just updated the Phoenix connector in Presto locally to work with Phoenix 
> 5.1.x.
> Among other things, I relocated a bunch more classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6119) UngroupedAggregateRegionObserver Malformed connection url Error thrown when using a zookeeper quorum

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6119:
---
Fix Version/s: (was: 5.1.0)
   5.2.0

> UngroupedAggregateRegionObserver Malformed connection url Error thrown when 
> using a zookeeper quorum
> 
>
> Key: PHOENIX-6119
> URL: https://issues.apache.org/jira/browse/PHOENIX-6119
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0
>Reporter: Kyle R Stehbens
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: PHOENIX-6119.master.v4.patch
>
>
> When using Phoenix with an HBase instance configured with an HA zookeeper 
> quorum URL like the following:
> hbase.zookeeper.quorum='zk1:2181,zk2:2181,zk3:2181'
> Phoenix throws exceptions when trying to collect statistics as follows:
> {noformat}
> 2020-09-09 21:19:45,806 INFO 
> [regionserver/regionserver1:16040-shortCompactions-0] util.QueryUtil: 
> Creating connection with the jdbc url: 
> jdbc:phoenix:zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
>  2020-09-09 21:19:45,808 WARN 
> [regionserver/regionserver1:16040-shortCompactions-0] 
> coprocessor.UngroupedAggregateRegionObserver: Unable to collect stats for 
> test_namespace:test_table
>  java.io.IOException: java.sql.SQLException: ERROR 102 (08001): Malformed 
> connection url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
>  at 
> org.apache.phoenix.schema.stats.DefaultStatisticsCollector.init(DefaultStatisticsCollector.java:124)
>  at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1097)
>  at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver$5.run(UngroupedAggregateRegionObserver.java:1082)
>  at java.security.AccessController.doPrivileged(Native Method)
>  at javax.security.auth.Subject.doAs(Subject.java:422)
>  at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:517)
>  at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:498)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>  at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
>  at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preCompact(UngroupedAggregateRegionObserver.java:1081)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:656)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$8.call(RegionCoprocessorHost.java:652)
>  at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:600)
>  at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:636)
>  at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:614)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preCompact(RegionCoprocessorHost.java:650)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.postCompactScannerOpen(Compactor.java:288)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.compact(Compactor.java:317)
>  at 
> org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor.compact(DefaultCompactor.java:65)
>  at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreEngine$DefaultCompactionContext.compact(DefaultStoreEngine.java:126)
>  at org.apache.hadoop.hbase.regionserver.HStore.compact(HStore.java:1454)
>  at org.apache.hadoop.hbase.regionserver.HRegion.compact(HRegion.java:2260)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.doCompaction(CompactSplit.java:616)
>  at 
> org.apache.hadoop.hbase.regionserver.CompactSplit$CompactionRunner.run(CompactSplit.java:658)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748)
>  Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection 
> url. :zk1:2181,zk2:2181,zk3:2181:2181:/hbase;
>  at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:570)
>  at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:195)
>  at 
> 

[jira] [Updated] (PHOENIX-6050) Set properties is invalid

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6050:
---
Fix Version/s: (was: 5.1.0)
   5.2.0

> Set properties is invalid
> -
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: PHOENIX-6050.master.001.patch, 
> PHOENIX-6050.master.002.patch
>
>
> I set properties on the client, such as "phoenix.query.threadPoolSize", but 
> they have no effect. The thread pool always uses the default value (128). 
> code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize","300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = 
> phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", 
> "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc,properties);
> throw is:
> Error: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] 
> (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> ^Reason:^
> I found that PhoenixDriver creates its thread pool before initializing the 
> config from the supplied properties, so when the thread pool is created, the 
> config still holds the default value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5894) Table versus Table Full Outer join on Salted tables not working

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5894:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Table versus Table Full Outer join on Salted tables not working
> ---
>
> Key: PHOENIX-5894
> URL: https://issues.apache.org/jira/browse/PHOENIX-5894
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: Ben Cohen
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.1, 4.16.1, 4.17.0
>
> Attachments: PHOENIX-5894.patch, 
> Salt_Bucketed_Table_Full_Outer_Join_Issue.docx
>
>
> Trying to do a Full Outer Join on two tables. The operation works when the 
> tables are not salted but fails with an exception related to casting when 
> performed on the salted versions of the tables. Here is the exceptions:
> "java.lang.ClassCastException: org.apache.phoenix.schema.PColumnImpl cannot 
> be cast to org.apache.phoenix.schema.ProjectedColumn
>         at 
> org.apache.phoenix.compile.JoinCompiler.joinProjectedTables(JoinCompiler.java:1256)
>         at 
> org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:425)
>         at 
> org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:228)
>         at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:191)
>         at 
> org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:456)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
>         at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
>         at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
>         at sqlline.Commands.execute(Commands.java:822)
>         at sqlline.Commands.sql(Commands.java:732)
>         at sqlline.SqlLine.dispatch(SqlLine.java:813)
>         at sqlline.SqlLine.begin(SqlLine.java:686)
>         at sqlline.SqlLine.start(SqlLine.java:398)
>         at sqlline.SqlLine.main(SqlLine.java:291)"
> I have attached a word document with the complete list of queries and their 
> results, along with commands to recreate the data.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5750:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Upsert on immutable table fails with AccessDeniedException
> --
>
> Key: PHOENIX-5750
> URL: https://issues.apache.org/jira/browse/PHOENIX-5750
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 5.1.1, 4.16.1, 4.17.0
>
> Attachments: PHOENIX-5750.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5750.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> // code placeholder
> In TableDDLPermissionsIT
> @Test
> public void testUpsertIntoImmutableTable() throws Throwable {
> startNewMiniCluster();
> final String schema = "TEST_INDEX_VIEW";
> final String tableName = "TABLE_DDL_PERMISSION_IT";
> final String phoenixTableName = schema + "." + tableName;
> grantSystemTableAccess();
> try {
> superUser1.runAs(new PrivilegedExceptionAction<Void>() {
> @Override
> public Void run() throws Exception {
> try {
> verifyAllowed(createSchema(schema), superUser1);
> verifyAllowed(onlyCreateTable(phoenixTableName), 
> superUser1);
> } catch (Throwable e) {
> if (e instanceof Exception) {
> throw (Exception)e;
> } else {
> throw new Exception(e);
> }
> }
> return null;
> }
> });
> if (isNamespaceMapped) {
> grantPermissions(unprivilegedUser.getShortName(), schema, 
> Action.WRITE, Action.READ,Action.EXEC);
> }
> // we should be able to read the data from another index as well to 
> which we have not given any access to
> // this user
> verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
> unprivilegedUser);
> } finally {
> revokeAll();
> }
> }
> in BasePermissionsIT:
> AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection(); Statement stmt = 
> conn.createStatement()) {
> assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
> + "(pk INTEGER not null primary key, data VARCHAR, 
> val integer)"));
> }
> return null;
> }
> };
> }
> AccessTestAction upsertRowsIntoTable(final String tableName) throws 
> SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection()) {
> try (PreparedStatement pstmt = conn.prepareStatement(
> "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
> for (int i = 0; i < NUM_RECORDS; i++) {
> pstmt.setInt(1, i);
> pstmt.setString(2, Integer.toString(i));
> pstmt.setInt(3, i);
> assertEquals(1, pstmt.executeUpdate());
> }
> }
> conn.commit();
> }
> return null;
> }
> };
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6336) Scan filter is incorrectly set to null for index rebuilds

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6336:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)

> Scan filter is incorrectly set to null for index rebuilds
> -
>
> Key: PHOENIX-6336
> URL: https://issues.apache.org/jira/browse/PHOENIX-6336
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6320) support hbase profile param at the release script

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6320:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)

> support hbase profile param at the release script
> -
>
> Key: PHOENIX-6320
> URL: https://issues.apache.org/jira/browse/PHOENIX-6320
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
>
> After PHOENIX-6307, we have the ability to release multiple HBase profiles 
> from one branch, but we need to provide a profile option for each release run. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6271) Effective DDL generated by SchemaExtractionTool should maintain the order of PK and other columns

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6271:
---
Fix Version/s: (was: 4.16.0)

> Effective DDL generated by SchemaExtractionTool should maintain the order of 
> PK and other columns
> -
>
> Key: PHOENIX-6271
> URL: https://issues.apache.org/jira/browse/PHOENIX-6271
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
>
> SchemaExtractionTool is used to generate effective DDL, which can then be 
> compared with the DDL on the cluster to perform schema monitoring. 
> This won't affect the monitoring part, but it would be good to have the PK 
> order in place so that the effective DDL can be used for creating the entity 
> for the first time in a new environment.
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6118:
---
Fix Version/s: (was: 4.16.0)

> Multi Tenant Workloads using PHERF
> --
>
> Key: PHOENIX-6118
> URL: https://issues.apache.org/jira/browse/PHOENIX-6118
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Features like PHOENIX_TTL and Splittable SYSCAT need to be tested against a 
> large number of tenant views.
> In the absence of a generic framework that can dynamically create a large 
> number of tenant views (including multi-level views) and query them, teams 
> have to write custom logic to replay/run functional and perf testing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6334) All map tasks should operate on the same restored snapshot

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6334:
---
Fix Version/s: (was: 4.x)
   (was: 4.16.0)
   (was: 5.1.0)

> All map tasks should operate on the same restored snapshot
> --
>
> Key: PHOENIX-6334
> URL: https://issues.apache.org/jira/browse/PHOENIX-6334
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Saksham Gangwar
>Assignee: Rushabh Shah
>Priority: Major
>
> Recently we switched an MR application from scanning live tables to scanning 
> snapshots (PHOENIX-3744). We ran into a severe performance issue, which 
> turned out to be a correctness issue due to overlapping scan-split generation. 
> After some debugging we figured out that it had already been fixed via 
> PHOENIX-4997. 
> We also *need not restore the snapshot per map task*. The purpose of this 
> Jira is to correct that behavior. Currently, we restore the snapshot once per 
> map task into a temp directory. For large tables on big clusters, this 
> creates a storm of NN RPCs. We can do this once per job and let all the map 
> tasks operate on the same restored snapshot. HBase already did this via 
> HBASE-18806; we can do something similar.
>  
> All other performance suggestions here: 
> https://issues.apache.org/jira/browse/PHOENIX-6081
>  
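
HBASE-18806 does this by restoring the snapshot a single time at job-submission time and pointing every map task at the same restore directory. A minimal sketch of the same idea, assuming HBase's RestoreSnapshotHelper utility (used by the snapshot input formats); the conf key and method shown here are hypothetical:

{code:java}
import java.util.UUID;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper;

public class RestoreOncePerJob {
    // Hypothetical conf key the map tasks would read the shared location from.
    static final String RESTORED_DIR_KEY = "phoenix.mr.snapshot.restored.dir";

    public static void setupSnapshotOnce(Configuration conf, String snapshotName,
            Path tmpRestoreRoot) throws Exception {
        Path rootDir = new Path(conf.get(HConstants.HBASE_DIR));
        FileSystem fs = rootDir.getFileSystem(conf);
        // Restore exactly once, at job-submission time, into a unique directory.
        Path restoreDir = new Path(tmpRestoreRoot, UUID.randomUUID().toString());
        RestoreSnapshotHelper.copySnapshotForScanner(conf, fs, rootDir, restoreDir,
                snapshotName);
        // Publish the location: every map task opens its scanners against this
        // one restored copy instead of restoring its own, avoiding the NN RPC
        // storm described above.
        conf.set(RESTORED_DIR_KEY, restoreDir.toString());
    }
}
{code}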



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6190) Race condition in view creation may allow conflicting changes for pre-4.15 clients and for scenarios with phoenix.allow.system.catalog.rollback=true

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6190:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)

> Race condition in view creation may allow conflicting changes for pre-4.15 
> clients and for scenarios with phoenix.allow.system.catalog.rollback=true
> 
>
> Key: PHOENIX-6190
> URL: https://issues.apache.org/jira/browse/PHOENIX-6190
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>
> For pre-4.15 clients and in scenarios where 
> phoenix.allow.system.catalog.rollback=true, we have to block adding/dropping 
> a column to/from a parent table/view as we no longer lock the parent on the 
> server side while creating a child view to prevent conflicting changes. This 
> is handled on the client side from 4.15 onwards.
> However, there is a slight race condition here where a view may be created 
> between the time we find all children of the parent and the time we do this 
> check (see 
> [this|https://github.com/apache/phoenix/blob/264310bd1e6c14996c3cfb11557fc66a012cb01b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2592]).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5346) SaltedIndexIT is flapping

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5346:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)
   5.2.0
   4.17.0

> SaltedIndexIT is flapping
> -
>
> Key: PHOENIX-5346
> URL: https://issues.apache.org/jira/browse/PHOENIX-5346
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Priority: Critical
>  Labels: disabled-test
> Fix For: 4.17.0, 5.2.0
>
>
> {code}
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 14.042 s <<< FAILURE! - in org.apache.phoenix.end2end.index.SaltedIndexIT
> [ERROR] 
> testMutableTableIndexMaintanenceSaltedSalted(org.apache.phoenix.end2end.index.SaltedIndexIT)
>   Time elapsed: 4.661 s  <<< FAILURE!
> org.junit.ComparisonFailure: expected:<[y]> but was:<[x]>
>   at 
> org.apache.phoenix.end2end.index.SaltedIndexIT.testMutableTableIndexMaintanence(SaltedIndexIT.java:129)
>   at 
> org.apache.phoenix.end2end.index.SaltedIndexIT.testMutableTableIndexMaintanenceSaltedSalted(SaltedIndexIT.java:74)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5906) Use a recent version of maven-shade-plugin

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5906:
---
Fix Version/s: (was: 4.16.0)
   4.17.0

> Use a recent version of maven-shade-plugin
> --
>
> Key: PHOENIX-5906
> URL: https://issues.apache.org/jira/browse/PHOENIX-5906
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.0
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 4.17.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The maven-shade-plugin configuration doesn't specify a particular version. 
> Depending on the Maven version, the resolved plugin may be too old to handle 
> Java 8 bytecodes, which may come in from Hadoop 3 among other places.  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-4878) Remove SharedTableState and replace with PTable

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4878:
---
Fix Version/s: (was: 4.16.0)
   4.17.0

> Remove SharedTableState and replace with PTable
> ---
>
> Key: PHOENIX-4878
> URL: https://issues.apache.org/jira/browse/PHOENIX-4878
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Xinyi Yan
>Priority: Minor
>  Labels: phoenix-hardening
> Fix For: 4.17.0
>
> Attachments: PHOENIX-4878.v2-4.x-HBase-1.3.patch, 
> PHOENIX-4878.v2-master.patch, PHOENIX-4878.v3-4.x-HBase-1.3.patch, 
> PHOENIX-4878.v3-master.patch, Screenshot from 2019-08-16 10-59-54.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When we drop a column from a base table we also drop view indexes that 
> require the column. This information is passed back to the client using the 
> SharedTableState proto. Convert this to use our regular PTable proto.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6085) Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the upgrade path

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6085:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)
   5.2.0
   4.17.0

> Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the 
> upgrade path
> -
>
> Key: PHOENIX-6085
> URL: https://issues.apache.org/jira/browse/PHOENIX-6085
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 4.17.0, 5.2.0
>
> Attachments: PHOENIX-6085.4.x.v1.patch, PHOENIX-6085.master.v1.patch
>
>
> We already make this call inside 
> [CQSI.acquireUpgradeMutex()|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4220]
>  and then call writeMutexCell() which calls this again 
> [here|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4244].
>  
> We should move this into writeMutexCell() itself and throw 
> UpgradeInProgressException there if required, to avoid unnecessary, expensive 
> HBase admin API calls.
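
A self-contained sketch of the shape of that refactor; the names below are simplified stand-ins for the ConnectionQueryServicesImpl internals, not the actual Phoenix code:

{code:java}
import java.util.concurrent.Callable;

public class MutexWriter {
    private final Callable<byte[]> sysMutexTableNameLookup;

    public MutexWriter(Callable<byte[]> sysMutexTableNameLookup) {
        this.sysMutexTableNameLookup = sysMutexTableNameLookup;
    }

    // Single resolution point: callers such as acquireUpgradeMutex() no longer
    // resolve the SYSTEM.MUTEX physical name themselves before delegating here.
    public boolean writeMutexCell(String rowKey) throws Exception {
        byte[] physicalTableName = sysMutexTableNameLookup.call();
        if (physicalTableName == null) {
            // Surface the upgrade-in-progress condition from this one place.
            throw new IllegalStateException(
                    "SYSTEM.MUTEX not found; upgrade may be in progress");
        }
        // ... write the mutex cell for rowKey into physicalTableName ...
        return true;
    }
}
{code}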



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6127) Prevent unnecessary HBase admin API calls in ViewUtil.getSystemTableForChildLinks() and act lazily instead

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-6127:
---
Fix Version/s: (was: 4.16.0)
   (was: 5.1.0)
   4.17.0

> Prevent unnecessary HBase admin API calls in 
> ViewUtil.getSystemTableForChildLinks() and act lazily instead
> --
>
> Key: PHOENIX-6127
> URL: https://issues.apache.org/jira/browse/PHOENIX-6127
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richárd Antal
>Priority: Major
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 4.17.0
>
> Attachments: PHOENIX-6127.master.v1.patch
>
>
> In order to handle the case of older clients connecting to a 4.16 cluster 
> that has old metadata (no SYSTEM.CHILD_LINK table yet), we call 
> ViewUtil.getSystemTableForChildLinks() to figure out whether to use 
> SYSTEM.CHILD_LINK or SYSTEM.CATALOG to look up parent->child linking rows.
> Here we do HBase table existence checks using HBase admin APIs (see 
> [this|https://github.com/apache/phoenix/blob/e3c7b4bdce2524eb4fd1e7eb0ccd3454fcca81ce/phoenix-core/src/main/java/org/apache/phoenix/util/ViewUtil.java#L265-L269]), 
> which can be avoided. In almost all cases, once we've called this API we 
> later go on to retrieve the Table object anyhow, so we can instead always try 
> to get the SYSTEM.CHILD_LINK table and, if that fails, fall back to 
> SYSTEM.CATALOG. This avoids the additional admin API calls.
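
A self-contained sketch of that lazy fallback (the table names are the real Phoenix system tables, but the method shape is illustrative, and namespace-mapped deployments would use SYSTEM:CHILD_LINK):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;

public class ChildLinkReader {
    // Lazy fallback: no Admin existence check. Note that getTable() itself is
    // lazy, so a missing SYSTEM.CHILD_LINK only surfaces as a
    // TableNotFoundException on the first actual read.
    public static Result readLinkRow(Connection conn, Get get) throws IOException {
        try (Table childLink = conn.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"))) {
            return childLink.get(get);
        } catch (TableNotFoundException e) {
            // Pre-4.15 metadata: parent->child links still live in SYSTEM.CATALOG.
            try (Table catalog = conn.getTable(TableName.valueOf("SYSTEM.CATALOG"))) {
                return catalog.get(get);
            }
        }
    }
}
{code}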



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5833) Incorrect results with RVCs and AND operator

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5833:
---
Fix Version/s: (was: 4.16.0)

> Incorrect results with RVCs and AND operator
> 
>
> Key: PHOENIX-5833
> URL: https://issues.apache.org/jira/browse/PHOENIX-5833
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.15.0
>Reporter: Bharath Vissapragada
>Assignee: Daniel Wong
>Priority: Critical
> Attachments: PHOENIX-5833.4.x.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Phoenix version: 4.15-HBase-1.5
> -- Create a test table and populate a couple of rows.
> {noformat}
> create table repro_bug(a varchar(10) not null, b varchar(10) not null, c 
> varchar(10) not null constraint pk primary key(a, b, c));
> upsert into repro_bug values('abc', 'def', 'RRSQ_IMKKL');
> upsert into repro_bug values('abc', 'def', 'RRS_ZYTDT');
> select * from repro_bug;
> +--+--+-+
> |  A   |  B   |  C  |
> +--+--+-+
> | abc  | def  | RRSQ_IMKKL  |
> | abc  | def  | RRS_ZYTDT   |
> +--+--+-+
> {noformat}
> -- Query 1 - Look for rows where C has a certain prefix - Returns correct 
> result
> {noformat}
> select A, B, C from REPRO_BUG where C like 'RRS\\_%';
> +--+--++
> |  A   |  B   | C  |
> +--+--++
> | abc  | def  | RRS_ZYTDT  |
> +--+--++
> {noformat}
> -- Query 2 - Look for rows where (a, b, c) > first row - Returns correct 
> result
> {noformat}
> select A, B, C from REPRO_BUG where (A, B, C) > ('abc', 'def', 'RRSQ_IMKKL')
> +--+--++
> |  A   |  B   | C  |
> +--+--++
> | abc  | def  | RRS_ZYTDT  |
> +--+--++
> {noformat}
> -- Query 3 - Combine the filters from Query 1 and Query 2 - Returns incorrect 
> result. Ideally it should return the same row as above.
> {noformat}
>  select A, B, C from REPRO_BUG where (A, B, C) > ('abc', 'def', 'RRSQ_IMKKL') 
> AND C like 'RRS\\_%';
> ++++
> | A  | B  | C  |
> ++++
> ++++
> {noformat}
> -- Explain for the above in case someone is interested.
> {noformat}
> explain select A, B, C from REPRO_BUG where (A, B, C) > ('abc', 'def', 
> 'RRSQ_IMKKL') AND C like 'RRS\\_%';
> ++-++--+
> |  PLAN   
>| EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS  |
> ++-++--+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER REPRO_BUG 
> ['abcdef'] - [*]  | null| null   | null |
> | SERVER FILTER BY FIRST KEY ONLY AND C LIKE 'RRS\_%' 
>| null| null   | null |
> ++-++--+
> 2 rows selected (0.003 seconds)
> {noformat}
> I'm trying to poke around in the code to figure out the issue, but my 
> understanding of Phoenix is limited at this point, so I'm creating a bug 
> report in case someone can figure this out quickly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6393) Please tidy up Incubator releases

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6393.

Resolution: Fixed

> Please tidy up Incubator releases
> -
>
> Key: PHOENIX-6393
> URL: https://issues.apache.org/jira/browse/PHOENIX-6393
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Ankit Singhal
>Priority: Major
>
> The following directory trees appear to be obsolete, and should be removed 
> please:
> https://dist.apache.org/repos/dist/release/incubator/tephra/
> https://dist.apache.org/repos/dist/dev/incubator/tephra/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6393) Please tidy up Incubator releases

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6393:
--

Assignee: Ankit Singhal

> Please tidy up Incubator releases
> -
>
> Key: PHOENIX-6393
> URL: https://issues.apache.org/jira/browse/PHOENIX-6393
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Ankit Singhal
>Priority: Major
>
> The following directory trees appear to be obsolete, and should be removed 
> please:
> https://dist.apache.org/repos/dist/release/incubator/tephra/
> https://dist.apache.org/repos/dist/dev/incubator/tephra/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6391) Please tidy up Incubator releases

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-6391:
--

Assignee: Ankit Singhal

> Please tidy up Incubator releases
> -
>
> Key: PHOENIX-6391
> URL: https://issues.apache.org/jira/browse/PHOENIX-6391
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Ankit Singhal
>Priority: Major
>
> The following directory trees appear to be obsolete, and should be deleted 
> please:
> https://dist.apache.org/repos/dist/release/incubator/omid/
> https://dist.apache.org/repos/dist/dev/incubator/omid/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6391) Please tidy up Incubator releases

2021-02-24 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6391.

Release Note: Done
  Resolution: Fixed

> Please tidy up Incubator releases
> -
>
> Key: PHOENIX-6391
> URL: https://issues.apache.org/jira/browse/PHOENIX-6391
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sebb
>Assignee: Ankit Singhal
>Priority: Major
>
> The following directory trees appear to be obsolete, and should be deleted 
> please:
> https://dist.apache.org/repos/dist/release/incubator/omid/
> https://dist.apache.org/repos/dist/dev/incubator/omid/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6331) Increase index retry from 1 to 2 in case of NotServingRegionException

2021-01-20 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6331:
--

 Summary: Increase index retry from 1 to 2 in case of 
NotServingRegionException
 Key: PHOENIX-6331
 URL: https://issues.apache.org/jira/browse/PHOENIX-6331
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Currently, we move the index to PENDING_DISABLE whenever a single write to the 
index fails, and carry out a retry at the client. This can be optimized for 
NotServingRegionException: index regions can move frequently depending on the 
balancer, and one more retry at the server could avoid unnecessary index state 
handling and retries at the client.

 
{code:java}
2021-01-20 06:54:58,682 WARN org.apache.hadoop.hbase.client.AsyncProcess: #277, 
table=, attempt=1/1 failed=1ops, last exception: 
org.apache.hadoop.hbase.NotServingRegionException: 
org.apache.hadoop.hbase.NotServingRegionException: Region  is not 
online on 
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2997)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1069)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2100)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
 on , tracking started Wed Jan 20 06:54:58 CET 2021; not 
retrying 1 - final failure
2021-01-20 06:54:58,690 INFO 
org.apache.phoenix.index.PhoenixIndexFailurePolicy: Successfully update 
INDEX_DISABLE_TIMESTAMP for  due to an exception while writing 
updates. indexState=PENDING_DISABLE
org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
disableIndexOnFailure=true, Failed to write to multiple index tables: 
[]
at 
org.apache.phoenix.hbase.index.write.TrackingParallelWriterIndexCommitter.write(TrackingParallelWriterIndexCommitter.java:236)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:195)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:156)
at 
org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
at 
org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:617)
at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:577)
at 
org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:560)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3421)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:765)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:716)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2146)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
2021-01-20 06:54:58,691 INFO 
org.apache.phoenix.hbase.index.util.IndexManagementUtil: Rethrowing 
org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to the 
index failed.  disableIndexOnFailure=true, Failed to write to multiple index 
tables: [] ,serverTimestamp=1611122098649,
2021-01-20 06:55:01,296 INFO SecurityLogger.org.apache.hadoop.hbase.Server: 
Auth successful for hbase (auth:SIMPLE) {code}
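
A minimal, self-contained sketch of the proposed behaviour: one extra server-side attempt on NotServingRegionException before falling back to the PENDING_DISABLE path. The IndexWrite hook is hypothetical, standing in for the actual index write:

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.NotServingRegionException;

public class NsreAwareIndexWriter {
    // Hypothetical hook standing in for the real index write.
    interface IndexWrite {
        void run() throws IOException;
    }

    static void writeWithOneRetry(IndexWrite write) throws IOException {
        try {
            write.run();
        } catch (NotServingRegionException e) {
            // The index region likely just moved; the region location cache is
            // refreshed on retry, so a single second attempt usually succeeds
            // and spares the index state transition and client-side retries.
            write.run();
        }
    }
}
{code}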
 

 



--
This message 

[jira] [Created] (PHOENIX-6298) Use timestamp of PENDING_DISABLE_COUNT to calculate elapse time for PENDING_DISABLE state

2021-01-04 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6298:
--

 Summary: Use timestamp of PENDING_DISABLE_COUNT to calculate 
elapse time for PENDING_DISABLE state
 Key: PHOENIX-6298
 URL: https://issues.apache.org/jira/browse/PHOENIX-6298
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Instead of taking indexDisableTimestamp to calculate the elapsed time, we 
should consider the last time we incremented or decremented the 
PENDING_DISABLE_COUNT counter. Otherwise, if the application's write failures 
span more than the default threshold of 30 seconds, the index gets disabled 
unnecessarily even though the client could have retried and made it active.

{code}
long elapsedSinceDisable =
        EnvironmentEdgeManager.currentTimeMillis() - Math.abs(indexDisableTimestamp);

// on an index write failure, the server side transitions to PENDING_DISABLE,
// then the client retries, and after retries are exhausted, disables the index
if (indexState == PIndexState.PENDING_DISABLE) {
    if (elapsedSinceDisable > pendingDisableThreshold) {
        // too long in PENDING_DISABLE - client didn't disable the index,
        // so we do it here
        IndexUtil.updateIndexState(conn, indexTableFullName,
                PIndexState.DISABLE, indexDisableTimestamp);
    }
    continue;
}
{code}
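
As a sketch, the proposed variant computes the elapsed time from the last PENDING_DISABLE_COUNT update instead; pendingDisableCountLastUpdated is a hypothetical input (e.g. the HBase cell timestamp of the last increment/decrement), and this check would sit where the elapsedSinceDisable comparison sits above:

{code:java}
public class PendingDisableCheck {
    // Disable only when no client has touched the counter for the whole
    // threshold window; long-running client retries keep the index alive.
    static boolean shouldDisable(long pendingDisableCountLastUpdated,
            long pendingDisableThreshold, long now) {
        long elapsedSinceLastFailure = now - pendingDisableCountLastUpdated;
        return elapsedSinceLastFailure > pendingDisableThreshold;
    }
}
{code}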



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4378.

Resolution: Duplicate

> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reopened PHOENIX-4378:


> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-4378) Unable to set KEEP_DELETED_CELLS to true on RS scanner

2020-11-10 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4378.

Resolution: Fixed

> Unable to set KEEP_DELETED_CELLS to true on RS scanner
> --
>
> Key: PHOENIX-4378
> URL: https://issues.apache.org/jira/browse/PHOENIX-4378
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.1.0
>
>
> [~jamestaylor], 
> It seems we may need to fix PHOENIX-4277 differently for HBase 2.0 as we can 
> only update TTL and maxVersions now in preStoreScannerOpen and cannot return 
> a new StoreScanner with updated scanInfo.
> for reference:
> [1]https://issues.apache.org/jira/browse/PHOENIX-4318?focusedCommentId=16249943&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16249943



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6196) Update phoenix.mutate.maxSizeBytes to accept long values

2020-10-20 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6196.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Update phoenix.mutate.maxSizeBytes to accept long values
> 
>
> Key: PHOENIX-6196
> URL: https://issues.apache.org/jira/browse/PHOENIX-6196
> Project: Phoenix
>  Issue Type: Task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> Currently, the config "phoenix.mutate.maxSizeBytes" accepts an int value, so 
> a user can only provide up to 2GB, but there are some scenarios, like UPSERT 
> SELECT from a temp table with large rows, where the user may want to set a 
> larger value when auto-commit is off. 
>  
>  
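
A sketch of what the change amounts to, assuming the Hadoop Configuration accessors; the default shown is illustrative, not Phoenix's actual default:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class MutateSizeConfig {
    // Illustrative default only -- not Phoenix's actual default value.
    private static final long DEFAULT_MAX_MUTATE_SIZE_BYTES = 104857600L;

    // Reading the property as a long lifts the ~2GB Integer.MAX_VALUE ceiling
    // that conf.getInt(...) imposes.
    public static long maxMutateSizeBytes(Configuration conf) {
        return conf.getLong("phoenix.mutate.maxSizeBytes",
                DEFAULT_MAX_MUTATE_SIZE_BYTES);
    }
}
{code}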



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6196) Update phoenix.mutate.maxSizeBytes to accept long values

2020-10-19 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6196:
--

 Summary: Update phoenix.mutate.maxSizeBytes to accept long values
 Key: PHOENIX-6196
 URL: https://issues.apache.org/jira/browse/PHOENIX-6196
 Project: Phoenix
  Issue Type: Task
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Currently, the config "phoenix.mutate.maxSizeBytes" accepts an int value, so a 
user can only provide up to 2GB, but there are some scenarios, like UPSERT SELECT 
from a temp table with large rows, where the user may want to set a larger value 
when auto-commit is off. 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6034) Optimize InListIT

2020-08-25 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6034.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Optimize InListIT
> -
>
> Key: PHOENIX-6034
> URL: https://issues.apache.org/jira/browse/PHOENIX-6034
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is just an attempt to take one test from Lars' 
> [list|https://www.mail-archive.com/dev@phoenix.apache.org/msg57310.html] and 
> improve its performance.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (TEPHRA-304) Remove Support for Java 7

2020-08-08 Thread Ankit Singhal (Jira)


[ 
https://issues.apache.org/jira/browse/TEPHRA-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17173715#comment-17173715
 ] 

Ankit Singhal commented on TEPHRA-304:
--

{quote}This change means that the Phoenix 4.x branch cannot move to 0.16 ever, 
as it is stuck on Java 7.

Maybe reconsider this, and keep the test workarounds ?
{quote}
Agreed with [~stoty]. Due to the dependency on the HBase runtime, we can't 
upgrade the JDK for the 4.x branches of Phoenix. Considering how heavily Tephra 
is used in Phoenix, which benefits from every subsequent Tephra release, I also 
think we should reconsider this. It also seems the problem can be worked around 
by just adding an option to use the TLS 1.2 protocol
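
One way such an option could look -- an assumption on my part, not the actual Tephra change -- is a process-wide opt-in to TLS 1.2 via the standard JSSE system property, which Java 7 supports but does not enable by default for client connections:

{code:java}
public class Tls12Option {
    public static void main(String[] args) {
        // Opt in to TLS 1.2 for HttpsURLConnection before any HTTPS call.
        System.setProperty("https.protocols", "TLSv1.2");
        // For Maven-driven builds/tests the equivalent is (illustrative):
        //   MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
    }
}
{code}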

> Remove Support for Java 7
> -
>
> Key: TEPHRA-304
> URL: https://issues.apache.org/jira/browse/TEPHRA-304
> Project: Phoenix Tephra
>  Issue Type: Improvement
>Reporter: Andreas Neumann
>Assignee: Andreas Neumann
>Priority: Major
> Fix For: 0.16.0-incubating
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Java 7 has long reached the end of its support life, yet Tephra still
> supports and tests with Java 7. After the recent change to use HTTPS, Java
> 7 causes repeated test failures due to its lack of support for TLS 1.2 (
> [https://central.sonatype.org/articles/2018/May/04/discontinued-support-for-tlsv11-and-below/])
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6023) Wrong result when issuing query for an immutable table with multiple column families

2020-07-21 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-6023.

Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

> Wrong result when issuing query for an immutable table with multiple column 
> families
> 
>
> Key: PHOENIX-6023
> URL: https://issues.apache.org/jira/browse/PHOENIX-6023
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Steps to reproduce are as follows:
> 1. Create an immutable table with multiple column families:
> {code}
> 0: jdbc:phoenix:> CREATE TABLE TEST (
> . . . . . . . . >   ID VARCHAR PRIMARY KEY,
> . . . . . . . . >   A.COL1 VARCHAR,
> . . . . . . . . >   B.COL2 VARCHAR
> . . . . . . . . > ) IMMUTABLE_ROWS = TRUE;
> No rows affected (1.182 seconds)
> {code}
> 2. Upsert some rows:
> {code}
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id0', '0', 'a');
> 1 row affected (0.138 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id1', '1', NULL);
> 1 row affected (0.009 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id2', '2', 'b');
> 1 row affected (0.011 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id3', '3', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id4', '4', 'c');
> 1 row affected (0.006 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id5', '5', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id6', '6', 'd');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id7', '7', NULL);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id8', '8', 'e');
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:> UPSERT INTO TEST VALUES ('id9', '9', NULL);
> 1 row affected (0.009 seconds)
> {code}
> 3. Count query is okay:
> {code}
> 0: jdbc:phoenix:> SELECT COUNT(COL1) FROM TEST WHERE COL2 IS NOT NULL;
> ++
> | COUNT(A.COL1)  |
> ++
> | 5  |
> ++
> 1 row selected (0.1 seconds)
> {code}
> 4. However, the following select query returns a wrong result (it should 
> return 5 records):
> {code}
> 0: jdbc:phoenix:> SELECT COL1 FROM TEST WHERE COL2 IS NOT NULL;
> +---+
> | COL1  |
> +---+
> | 0 |
> | 1 |
> | 2 |
> | 3 |
> | 4 |
> | 5 |
> | 6 |
> | 7 |
> | 8 |
> | 9 |
> +---+
> 10 rows selected (0.058 seconds)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6034) Optimize InListIT

2020-07-21 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6034:
--

 Summary: Optimize InListIT
 Key: PHOENIX-6034
 URL: https://issues.apache.org/jira/browse/PHOENIX-6034
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Reporter: Ankit Singhal
Assignee: Ankit Singhal


This is just an attempt to take one test from Lars' 
[list|https://www.mail-archive.com/dev@phoenix.apache.org/msg57310.html] and 
improve its performance.

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-20 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Attachment: PHOENIX-5884.master.v3.patch

> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5884.master.v2.patch, 
> PHOENIX-5884.master.v3.patch, PHOENIX-5884_v1.patch
>
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result. Since the added filters are based on the 
> join condition, the above and below queries should be semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Attachment: PHOENIX-5884_v1.patch

> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-5884_v1.patch
>
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result. Since the added filters are based on the 
> join condition, the above and below queries should be semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5884:
---
Description: 
Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code}

But when, to optimize the query, filters for the left table are also added, 
the query returns an empty result. Since the added filters are based on the 
join condition, the above and below queries should be semantically the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}

  was:
Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code

But when, to optimize the query, filters for the left table are also added, 
the query returns an empty result. Since the added filters are based on the 
join condition, the above and below queries should be semantically the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}


> Join query return empty result when filters for both the tables are present
> ---
>
> Key: PHOENIX-5884
> URL: https://issues.apache.org/jira/browse/PHOENIX-5884
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
>
> Let's assume the DDL is the same for both tables involved in the join:
> {code}
> CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT 
> NULL,id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) 
> NOT NULL, id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts 
> TIMESTAMP ,CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
> KEY(id1,id2,id3,id4,id5,id6,id7))
> {code}
> The following query returns the right results:
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and 
> m.id2 = r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and 
> m.ts = r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and 
> r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}
> But when, to optimize the query, filters for the left table are also added, 
> the query returns an empty result. Since the added filters are based on the 
> join condition, the above and below queries should be semantically the same.
> {code}
> SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
> m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and 
> m.ts = r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN 
> ('201904','201905') and r.id2 = 'ID2_VAL' and m.id3 IN 
> ('ID3_VAL','ID3_VAL2')  and r.id3 IN ('ID3_VAL','ID3_VAL2') 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5884) Join query return empty result when filters for both the tables are present

2020-05-05 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-5884:
--

 Summary: Join query return empty result when filters for both the 
tables are present
 Key: PHOENIX-5884
 URL: https://issues.apache.org/jira/browse/PHOENIX-5884
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: Ankit Singhal


Let's assume the DDL is the same for both tables involved in the join:
{code}
CREATE TABLE LeftTable (id1 CHAR(6) NOT NULL,id2 VARCHAR(22) NOT NULL,  
  id3 VARCHAR(12) NOT NULL,id4 CHAR(2) NOT NULL,id5 CHAR(6) NOT NULL,   
  id6 VARCHAR(200) NOT NULL,id7 VARCHAR(50) NOT NULL,ts TIMESTAMP ,
CONSTRAINT PK_JOIN_AND_INTERSECTION_TABLE PRIMARY 
KEY(id1,id2,id3,id4,id5,id6,id7))
{code}

The following query returns the right results:
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3 and m.id2 
= r.id2  and m.id4 = r.id4  and m.id5 = r.id5  and m.id1 = r.id1  and m.ts = 
r.ts  where  r.id1 IN ('201904','201905')  and r.id2 = 'ID2_VAL'  and r.id3 IN 
('ID3_VAL','ID3_VAL2') 
{code

But when, to optimize the query, filters for the left table are also added, 
the query returns an empty result. Since the added filters are based on the 
join condition, the above and below queries should be semantically the same.
{code}
SELECT m.*,r.* FROM LEFT_TABLE m join RIGHT_TABLE r  on m.id3 = r.id3  and 
m.id2 = r.id2  and m.id4 = r.id4 and m.id5 = r.id5  and m.id1 = r.id1 and m.ts 
= r.ts  where m.id1 IN ('201904','201905')  and r.id1 IN ('201904','201905') 
and r.id2 = 'ID2_VAL' and m.id3 IN ('ID3_VAL','ID3_VAL2')  and 
r.id3 IN ('ID3_VAL','ID3_VAL2') 
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5691) create index is failing when phoenix acls enabled and ranger is enabled

2020-01-23 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5691:
---
Summary: create index is failing when phoenix acls enabled and ranger is 
enabled  (was: create index is failing when phoenix acls enabled when ranger is 
enabled)

> create index is failing when phoenix acls enabled and ranger is enabled
> ---
>
> Key: PHOENIX-5691
> URL: https://issues.apache.org/jira/browse/PHOENIX-5691
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5691.patch
>
>
> Create index fails with the following exception when Phoenix ACLs are 
> enabled:
> {noformat}
>   
> phoenix.acls.enabled
> true
>   
> {noformat}
> {noformat}
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:103)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:603)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16537)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8305)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2497)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2479)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.getUserPermsFromUserDefinedAccessController(PhoenixAccessController.java:448)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:431)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:418)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:515)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:496)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:418)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:498)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:116)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:157)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:87)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:107)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:568)
>   ... 9 more
>   at 

[jira] [Updated] (PHOENIX-5691) create index is failing when phoenix acls enabled when ranger is enabled

2020-01-23 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5691:
---
Summary: create index is failing when phoenix acls enabled when ranger is 
enabled  (was: create index is failing when phoenix acls enabled.)

> create index is failing when phoenix acls enabled when ranger is enabled
> 
>
> Key: PHOENIX-5691
> URL: https://issues.apache.org/jira/browse/PHOENIX-5691
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5691.patch
>
>
> Create index fails with the following exception when Phoenix ACLs are 
> enabled:
> {noformat}
>   
> phoenix.acls.enabled
> true
>   
> {noformat}
> {noformat}
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:103)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:603)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16537)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8305)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2497)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2479)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318)
> Caused by: java.lang.ClassCastException: 
> org.apache.hadoop.hbase.ipc.HBaseRpcControllerImpl cannot be cast to 
> com.google.protobuf.RpcController
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.getUserPermsFromUserDefinedAccessController(PhoenixAccessController.java:448)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:431)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController$3.run(PhoenixAccessController.java:418)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:515)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUser(SecurityUtil.java:496)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
>   at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:192)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.getUserPermissions(PhoenixAccessController.java:418)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:498)
>   at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preGetTable(PhoenixAccessController.java:116)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:157)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$1.call(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:87)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:107)
>   at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preGetTable(PhoenixMetaDataCoprocessorHost.java:154)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:568)
>   ... 9 more
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>  

[jira] [Resolved] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-29 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-5594.

Resolution: Fixed

> Different permission of phoenix-*-queryserver.log from umask
> 
>
> Key: PHOENIX-5594
> URL: https://issues.apache.org/jira/browse/PHOENIX-5594
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The permission of phoenix-*-queryserver.log is different from the umask we 
> set.
> For example, when we set the umask to 077, the permission of 
> phoenix-*-queryserver.log should be 600, but it's 666:
> {code}
> $ umask 077
> $ /bin/queryserver.py start
> starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
> $ ll /var/log/hbase/phoenix*
> -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
> -rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
> {code}
> It looks like the permission of phoenix-*-queryserver.out is correct (600).
> queryserver.py launches the QueryServer process as a subprocess, but it looks 
> like the umask is not inherited. I think we need to propagate the umask to 
> the subprocess.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5594) Different permission of phoenix-*-queryserver.log from umask

2019-11-29 Thread Ankit Singhal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-5594:
---
Fix Version/s: 4.15.0

> Different permission of phoenix-*-queryserver.log from umask
> 
>
> Key: PHOENIX-5594
> URL: https://issues.apache.org/jira/browse/PHOENIX-5594
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 4.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The permission of phoenix-*-queryserver.log is different from the umask we 
> set.
> For example, when we set the umask to 077, the permission of 
> phoenix-*-queryserver.log should be 600, but it's 666:
> {code}
> $ umask 077
> $ /bin/queryserver.py start
> starting Query Server, logging to /var/log/hbase/phoenix-hbase-queryserver.log
> $ ll /var/log/hbase/phoenix*
> -rw-rw-rw- 1 hbase hadoop 6181 Nov 27 13:52 phoenix-hbase-queryserver.log
> -rw------- 1 hbase hadoop 1358 Nov 27 13:52 phoenix-hbase-queryserver.out
> {code}
> It looks like the permission of phoenix-*-queryserver.out is correct (600).
> queryserver.py launches the QueryServer process as a subprocess, but it looks 
> like the umask is not inherited. I think we need to propagate the umask to 
> the subprocess.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

