[jira] [Updated] (PHOENIX-6187) Avoid swallowing UPSERT... ON DUPLICATION KEY UPDATE failures

2020-10-14 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-6187:

Description: 
When there is an UPSERT with an ON DUPLICATE KEY condition and the region 
mutation response is a failure, the failure is ignored and silently swallowed, 
resulting in silent write failures. 

This is a major issue: the client assumes the write was successful 
while it actually was not. 

Also see another report, 
[https://lists.apache.org/thread.html/ra229cea951236d9c48d06ec137f415997f0acb0d99078a6c19290c0e%40%3Cdev.phoenix.apache.org%3E],
which could be caused by this issue.

  was:
When there is an UPSERT with an ON DUPLICATE KEY condition and the region 
mutation response is a failure, the failure is ignored and silently swallowed, 
resulting in silent write failures. 

This is a major issue: the client assumes the write was successful 
while it actually was not. 

Also see another report 
[https://lists.apache.org/thread.html/ra229cea951236d9c48d06ec137f415997f0acb0d99078a6c19290c0e%40%3Cdev.phoenix.apache.org%3E]

 


>  Avoid swallowing UPSERT... ON DUPLICATION KEY UPDATE failures
> --
>
> Key: PHOENIX-6187
> URL: https://issues.apache.org/jira/browse/PHOENIX-6187
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Critical
>
> When there is an UPSERT with an ON DUPLICATE KEY condition and the region 
> mutation response is a failure, the failure is ignored and silently swallowed, 
> resulting in silent write failures. 
> This is a major issue: the client assumes the write was successful 
> while it actually was not. 
> Also see another report, 
> [https://lists.apache.org/thread.html/ra229cea951236d9c48d06ec137f415997f0acb0d99078a6c19290c0e%40%3Cdev.phoenix.apache.org%3E],
> which could be caused by this issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)



[jira] [Created] (PHOENIX-6187) Avoid swallowing UPSERT... ON DUPLICATION KEY UPDATE failures

2020-10-14 Thread Mehdi Salarkia (Jira)
Mehdi Salarkia created PHOENIX-6187:
---

 Summary:  Avoid swallowing UPSERT... ON DUPLICATION KEY UPDATE 
failures
 Key: PHOENIX-6187
 URL: https://issues.apache.org/jira/browse/PHOENIX-6187
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.0.0
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


When there is an UPSERT with an ON DUPLICATE KEY condition and the region 
mutation response is a failure, the failure is ignored and silently swallowed, 
resulting in silent write failures. 

This is a major issue: the client assumes the write was successful 
while it actually was not. 
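A minimal sketch of the statement shape involved (the table and column names 
below are illustrative, not from this issue):
{code:java}
-- Illustrative counter table: the conditional (atomic) upsert should either
-- apply on the server or surface an error to the client -- never silently fail.
UPSERT INTO COUNTERS (ID, CNT) VALUES ('row1', 0)
ON DUPLICATE KEY UPDATE CNT = CNT + 1;
{code}
If the region mutation fails here, the client's executeUpdate/commit should 
raise an exception rather than return normally.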





[jira] [Updated] (PHOENIX-6074) Let PQS Act As An Admin Tool Rest Endpoint

2020-08-12 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-6074:

Summary: Let PQS Act As An Admin Tool Rest Endpoint  (was: Let PQS Act As 
An Admin Tool Endpoint)

> Let PQS Act As An Admin Tool Rest Endpoint
> --
>
> Key: PHOENIX-6074
> URL: https://issues.apache.org/jira/browse/PHOENIX-6074
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Minor
>
> In our production environment we need to create a lot of indexes and use 
> IndexTool to build them; we also sometimes use tools like IndexScrutiny to 
> verify the health and status of indexes.
> PQS can act as a REST API endpoint (proxy) that allows developers to call 
> and run commands that Phoenix currently supports via the command line only:
>  * IndexTool
>  * IndexScrutiny
> Benefits:
>  # Allow developers to build tooling in their applications that runs and 
> integrates Phoenix command-line tools without human intervention.
>  # Remove unnecessary production access permissions from non-admins.
>  # Simplify index management (or any other future command-line tool that 
> is added to Phoenix) and reduce the possibility of human error.
> I was looking at the implementation of PHOENIX-5827 as an example. I think we 
> can simply define a new context and use it to trigger Phoenix command-line 
> tools from PQS and return the result (perhaps the MR job link,...) to the 
> client.





[jira] [Created] (PHOENIX-6074) Let PQS Act As An Admin Tool Endpoint

2020-08-12 Thread Mehdi Salarkia (Jira)
Mehdi Salarkia created PHOENIX-6074:
---

 Summary: Let PQS Act As An Admin Tool Endpoint
 Key: PHOENIX-6074
 URL: https://issues.apache.org/jira/browse/PHOENIX-6074
 Project: Phoenix
  Issue Type: Improvement
  Components: queryserver
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


In our production environment we need to create a lot of indexes and use 
IndexTool to build them; we also sometimes use tools like IndexScrutiny to 
verify the health and status of indexes.

PQS can act as a REST API endpoint (proxy) that allows developers to call and 
run commands that Phoenix currently supports via the command line only:
 * IndexTool
 * IndexScrutiny

Benefits:
 # Allow developers to build tooling in their applications that runs and 
integrates Phoenix command-line tools without human intervention.
 # Remove unnecessary production access permissions from non-admins.
 # Simplify index management (or any other future command-line tool that is 
added to Phoenix) and reduce the possibility of human error.

I was looking at the implementation of PHOENIX-5827 as an example. I think we 
can simply define a new context and use it to trigger Phoenix command-line 
tools from PQS and return the result (perhaps the MR job link,...) to the 
client.
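As a sketch only (no such endpoint exists today; the path, payload, and 
response shape below are entirely hypothetical), the interaction could look 
like:
{code}
POST /admin/indexTool HTTP/1.1
Host: pqs.example.com:8765
Content-Type: application/json

{"schema": "MY_SCHEMA", "dataTable": "MY_TABLE", "indexTable": "MY_INDEX"}

HTTP/1.1 202 Accepted
Content-Type: application/json

{"status": "SUBMITTED", "trackingUrl": "<MR job link>"}
{code}
Returning the MR job link lets the caller poll job status without shell access 
to the cluster.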





[jira] [Updated] (PHOENIX-5950) View With Where Clause On A Table With Composite Key Should Be Able To Optimize Queries

2020-06-10 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5950:

Description: 
For a table with a composite primary key
{code:java}
CREATE TABLE MY_TABLE (K1 INTEGER NOT NULL, K2 VARCHAR NOT NULL, K3 INTEGER NOT 
NULL, V1 DECIMAL, CONSTRAINT pk PRIMARY KEY (K1, K2, K3))
{code}
when a view is created that covers some (but not all) of the primary key columns
{code:java}
CREATE VIEW MY_VIEW(V2 VARCHAR, V3 VARCHAR) AS SELECT * FROM MY_TABLE WHERE K2 
= 'A'
{code}
if you run a query on the view without providing all the primary key columns
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));
| PLAN                                                                                          | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN SKIP SCAN ON 2 KEYS OVER MY_TABLE [1,'A'] - [3,'A'] | null           | null          | null        |
| SERVER FILTER BY (K1, K3) IN ([128,0,0,1,128,0,0,2],[128,0,0,3,128,0,0,4])                    | null           | null          | null        |
2 rows selected (0.047 seconds)
{code}
the query generated is a scan rather than a point lookup; the same query on the 
parent table (with all the keys) looks like this
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4));
| PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK 2 ROWS 268 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 268            | 2             | 0           |
1 row selected (0.025 seconds)
{code}
The issue is that the view condition is always ANDed to the user-provided WHERE 
clause, and in this case the query optimizer fails to optimize the query into a 
point lookup.

*[AFFECTED USE CASE]*

The impact of this issue is most visible when you run the IndexScrutiny tool on 
a view with an index, which generates queries like:
{code:java}
SELECT /*+ NO_INDEX */ CAST("K1" AS INTEGER),CAST("K2" AS VARCHAR),CAST("V1" AS 
DECIMAL),CAST("0"."V2" AS VARCHAR),CAST("0"."V3" AS VARCHAR) FROM MY_VIEW WHERE 
("K1","K3") IN ((?,?),(?,?));
{code}
which perform very poorly.

*[POSSIBLE WORKAROUND]*

One possible workaround is to provide all the primary key columns (including 
the view's PK column):
{code:java}
EXPLAIN SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4));
| PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK 2 ROWS 632 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 632            | 2             | 0           |
{code}
but as you can see the projected _EST_BYTES_READ_ goes up, because the 
underlying query that gets executed is something like:
{code:java}
SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4)) AND K2 = 'A';
{code}
and the extra `AND K2 = 'A'` is redundant.

*[PROPOSED SOLUTION]*

[jira] [Assigned] (PHOENIX-5950) View With Where Clause On A Table With Composite Key Should Be Able To Optimize Queries

2020-06-10 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia reassigned PHOENIX-5950:
---

Assignee: Mehdi Salarkia

> View With Where Clause On A Table With Composite Key Should Be Able To 
> Optimize Queries 
> 
>
> Key: PHOENIX-5950
> URL: https://issues.apache.org/jira/browse/PHOENIX-5950
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Major
>
> For a table with a composite primary key
> {code:java}
> CREATE TABLE MY_TABLE (K1 INTEGER NOT NULL, K2 VARCHAR NOT NULL, K3 INTEGER 
> NOT NULL, V1 DECIMAL, CONSTRAINT pk PRIMARY KEY (K1, K2, K3))
> {code}
> when a view is created that covers some (but not all) of the primary key 
> columns
> {code:java}
> CREATE VIEW MY_VIEW(V2 VARCHAR, V3 VARCHAR) AS SELECT * FROM MY_TABLE WHERE 
> K2 = 'A'
> {code}
> if you run a query on the view without providing all the primary key columns
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));
> | PLAN                                                                                          | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN SKIP SCAN ON 2 KEYS OVER MY_TABLE [1,'A'] - [3,'A'] | null           | null          | null        |
> | SERVER FILTER BY (K1, K3) IN ([128,0,0,1,128,0,0,2],[128,0,0,3,128,0,0,4])                    | null           | null          | null        |
> 2 rows selected (0.047 seconds)
> {code}
> the query generated is a scan rather than a point lookup; the same query on 
> the parent table (with all the keys) looks like this
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4));
> | PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK 2 ROWS 268 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 268            | 2             | 0           |
> 1 row selected (0.025 seconds)
> {code}
> The issue is that the view condition is always ANDed to the user-provided 
> WHERE clause, and in this case the query optimizer fails to optimize the 
> query into a point lookup.
>
> *[POSSIBLE WORKAROUND]*
>
> One possible workaround is to provide all the primary key columns (including 
> the view's PK column):
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4));
> | PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK 2 ROWS 632 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 632            | 2             | 0           |
> {code}
> but as you can see the projected _EST_BYTES_READ_ goes up, because the 
> underlying query that gets executed is something like:
> {code:java}
> SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4)) AND K2 = 'A';
> {code}
> and the extra `AND K2 = 'A'` is redundant.
>

[jira] [Updated] (PHOENIX-5950) View With Where Clause On A Table With Composite Key Should Be Able To Optimize Queries

2020-06-10 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5950:

Description: 
For a table with a composite primary key
{code:java}
CREATE TABLE MY_TABLE (K1 INTEGER NOT NULL, K2 VARCHAR NOT NULL, K3 INTEGER NOT 
NULL, V1 DECIMAL, CONSTRAINT pk PRIMARY KEY (K1, K2, K3))
{code}
when a view is created that covers some (but not all) of the primary key columns
{code:java}
CREATE VIEW MY_VIEW(V2 VARCHAR, V3 VARCHAR) AS SELECT * FROM MY_TABLE WHERE K2 
= 'A'
{code}
if you run a query on the view without providing all the primary key columns
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));
| PLAN                                                                                          | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN SKIP SCAN ON 2 KEYS OVER MY_TABLE [1,'A'] - [3,'A'] | null           | null          | null        |
| SERVER FILTER BY (K1, K3) IN ([128,0,0,1,128,0,0,2],[128,0,0,3,128,0,0,4])                    | null           | null          | null        |
2 rows selected (0.047 seconds)
{code}
the query generated is a scan rather than a point lookup; the same query on the 
parent table (with all the keys) looks like this
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4));
| PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK 2 ROWS 268 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 268            | 2             | 0           |
1 row selected (0.025 seconds)
{code}
The issue is that the view condition is always ANDed to the user-provided WHERE 
clause, and in this case the query optimizer fails to optimize the query into a 
point lookup.

*[POSSIBLE WORKAROUND]*

One possible workaround is to provide all the primary key columns (including 
the view's PK column):
{code:java}
EXPLAIN SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4));
| PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK 2 ROWS 632 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 632            | 2             | 0           |
{code}
but as you can see the projected _EST_BYTES_READ_ goes up, because the 
underlying query that gets executed is something like:
{code:java}
SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4)) AND K2 = 'A';
{code}
and the extra `AND K2 = 'A'` is redundant.

*[PROPOSED SOLUTION]*

We can inject the view condition into any partial primary key lookup 
(tuple-style conditions), respecting the column order defined in the parent 
table.
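As a sketch of the intended rewrite (assuming the optimizer can fold the view's 
K2 = 'A' condition into each tuple), the view query above would effectively 
become:
{code:java}
-- user query against the view:
SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));

-- after injecting the view condition K2 = 'A' into each tuple, preserving
-- the parent table's PK column order (K1, K2, K3):
SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN ((1,'A',2),(3,'A',4));
{code}
which the optimizer already compiles to a point lookup, as the parent-table 
plan above shows.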




[jira] [Updated] (PHOENIX-5950) View With Where Clause On A Table With Composite Key Should Be Able To Optimize Queries

2020-06-10 Thread Mehdi Salarkia (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5950:

Affects Version/s: 5.0.0, 4.14.3

> View With Where Clause On A Table With Composite Key Should Be Able To 
> Optimize Queries 
> 
>
> Key: PHOENIX-5950
> URL: https://issues.apache.org/jira/browse/PHOENIX-5950
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Mehdi Salarkia
>Priority: Major
>
> For a table with a composite primary key
> {code:java}
> CREATE TABLE MY_TABLE (K1 INTEGER NOT NULL, K2 VARCHAR NOT NULL, K3 INTEGER 
> NOT NULL, V1 DECIMAL, CONSTRAINT pk PRIMARY KEY (K1, K2, K3))
> {code}
> when a view is created that covers some (but not all) of the primary key 
> columns
> {code:java}
> CREATE VIEW MY_VIEW(V2 VARCHAR, V3 VARCHAR) AS SELECT * FROM MY_TABLE WHERE 
> K2 = 'A'
> {code}
> if you run a query on the view without providing all the primary key columns
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));
> | PLAN                                                                                          | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN SKIP SCAN ON 2 KEYS OVER MY_TABLE [1,'A'] - [3,'A'] | null           | null          | null        |
> | SERVER FILTER BY (K1, K3) IN ([128,0,0,1,128,0,0,2],[128,0,0,3,128,0,0,4])                    | null           | null          | null        |
> 2 rows selected (0.047 seconds)
> {code}
> the query generated is a scan rather than a point lookup; the same query on 
> the parent table (with all the keys) looks like this
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4));
> | PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK 2 ROWS 268 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 268            | 2             | 0           |
> 1 row selected (0.025 seconds)
> {code}
> The issue is that the view condition is always ANDed to the user-provided 
> WHERE clause, and in this case the query optimizer fails to optimize the 
> query into a point lookup.
>
> *[POSSIBLE WORKAROUND]*
>
> One possible workaround is to provide all the primary key columns (including 
> the view's PK column):
> {code:java}
> EXPLAIN SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4));
> | PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> | CLIENT 1-CHUNK 2 ROWS 632 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 632            | 2             | 0           |
> {code}
> but as you can see the projected _EST_BYTES_READ_ goes up, because the 
> underlying query that gets executed is something like:
> {code:java}
> SELECT K1, K2, K3, V1, V2, V3 FROM MY_VIEW WHERE (K1,K2,K3) IN 
> ((1,'A',2),(3,'A',4)) AND K2 = 'A';
> {code}
> and the extra `AND K2 = 'A'` is redundant.
> 

[jira] [Created] (PHOENIX-5950) View With Where Clause On A Table With Composite Key Should Be Able To Optimize Queries

2020-06-10 Thread Mehdi Salarkia (Jira)
Mehdi Salarkia created PHOENIX-5950:
---

 Summary: View With Where Clause On A Table With Composite Key 
Should Be Able To Optimize Queries 
 Key: PHOENIX-5950
 URL: https://issues.apache.org/jira/browse/PHOENIX-5950
 Project: Phoenix
  Issue Type: Bug
Reporter: Mehdi Salarkia


For a table with a composite primary key
{code:java}
CREATE TABLE MY_TABLE (K1 INTEGER NOT NULL, K2 VARCHAR NOT NULL, K3 INTEGER NOT 
NULL, V1 DECIMAL, CONSTRAINT pk PRIMARY KEY (K1, K2, K3))
{code}
when a view is created that covers some (but not all) of the primary key columns
{code:java}
CREATE VIEW MY_VIEW(V2 VARCHAR, V3 VARCHAR) AS SELECT * FROM MY_TABLE WHERE K2 
= 'A'
{code}
if you run a query on the view without providing all the primary key columns
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_VIEW WHERE (K1,K3) IN ((1,2),(3,4));
| PLAN                                                                                          | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN SKIP SCAN ON 2 KEYS OVER MY_TABLE [1,'A'] - [3,'A'] | null           | null          | null        |
| SERVER FILTER BY (K1, K3) IN ([128,0,0,1,128,0,0,2],[128,0,0,3,128,0,0,4])                    | null           | null          | null        |
2 rows selected (0.047 seconds)
{code}
the query generated is a scan rather than a point lookup; the same query on the 
parent table looks like this
{code:java}
EXPLAIN SELECT K1, K2, K3, V1 FROM MY_TABLE WHERE (K1,K2,K3) IN 
((1,'A',2),(3,'A',4));
| PLAN                                                                                            | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
| CLIENT 1-CHUNK 2 ROWS 268 BYTES PARALLEL 1-WAY ROUND ROBIN POINT LOOKUP ON 2 KEYS OVER MY_TABLE | 268            | 2             | 0           |
1 row selected (0.025 seconds)
{code}
The issue is that the view condition is always ANDed to the user-provided WHERE 
clause, and in this case the query optimizer fails to optimize the query into a 
point lookup.





[jira] [Updated] (PHOENIX-5421) Phoenix Query server tests race condition issue on creating keytab folder

2019-08-02 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5421:

Attachment: PHOENIX-5421.patch

> Phoenix Query server tests race condition issue on creating keytab folder
> -
>
> Key: PHOENIX-5421
> URL: https://issues.apache.org/jira/browse/PHOENIX-5421
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Blocker
> Fix For: queryserver-1.0.0, 4.14.3
>
> Attachments: PHOENIX-5421.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The two recently modified tests: 
>  * org.apache.phoenix.end2end.HttpParamImpersonationQueryServerIT
>  * org.apache.phoenix.end2end.SecureQueryServerIT
> share the same logic to construct a Kerberos configuration folder and run a 
> mini kerberized cluster. This can run into a race condition when one test 
> deletes the folder while the other is trying to use it to set up its cluster.
>  
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 
> s <<< FAILURE! - in org.apache.phoenix.end2end.SecureQueryServerIT
> [ERROR] org.apache.phoenix.end2end.SecureQueryServerIT Time elapsed: 0.007 s 
> <<< ERROR!
> java.io.IOException: Login failure for securecluster/localh...@example.com 
> from keytab 
> /Users/msalarkia/workspace/apache-phoenix/phoenix-queryserver/target/AbstractKerberisedTest/keytabs/test.keytab:
>  javax.security.auth.login.LoginException: Checksum failed
> at 
> org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
> Caused by: javax.security.auth.login.LoginException: Checksum failed
> at 
> org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
> Caused by: sun.security.krb5.KrbCryptoException: Checksum failed
> at 
> org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
> Caused by: java.security.GeneralSecurityException: Checksum failed
> at 
> org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5221) Phoenix Kerberos Integration tests failure on Redhat Linux

2019-08-02 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5221:

Attachment: PHOENIX-5221.4.14-HBase-1.4.v1.patch

> Phoenix Kerberos Integration tests failure on Redhat Linux
> --
>
> Key: PHOENIX-5221
> URL: https://issues.apache.org/jira/browse/PHOENIX-5221
> Project: Phoenix
>  Issue Type: Bug
> Environment: Redhat / Centos Linux 
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Blocker
> Fix For: 4.15.0, queryserver-1.0.0, 4.14.3
>
> Attachments: PHOENIX-5221.4.14-HBase-1.4.v1.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Due to this bug https://bugzilla.redhat.com/show_bug.cgi?id=668830 we need to 
> use `localhost.localdomain` when running these tests on Jenkins (CentOS), but 
> on Mac OS it should be `localhost` for the tests to pass.
> The reason is that the Kerberos principals in these tests are looked up from 
> /etc/hosts, and 127.0.0.1 resolves to `localhost.localdomain` rather than 
> `localhost` on Redhat. The KDC sees `localhost` != `localhost.localdomain`, 
> and as a result the test fails with an authentication error.
> It's also important to note that these principals are shared between HDFS and 
> HBase in this mini HBase cluster.
> Some more reading: https://access.redhat.com/solutions/57330



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (PHOENIX-5421) Phoenix Query server tests race condition issue on creating keytab folder

2019-08-02 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-5421:
---

 Summary: Phoenix Query server tests race condition issue on 
creating keytab folder
 Key: PHOENIX-5421
 URL: https://issues.apache.org/jira/browse/PHOENIX-5421
 Project: Phoenix
  Issue Type: Bug
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


The two recently modified tests:
 * org.apache.phoenix.end2end.HttpParamImpersonationQueryServerIT
 * org.apache.phoenix.end2end.SecureQueryServerIT

share the same logic to construct a Kerberos configuration folder and run a 
mini kerberized cluster. This can run into a race condition when one test 
deletes the folder while the other is trying to use it to set up its cluster.

 
{code:java}
[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.007 s 
<<< FAILURE! - in org.apache.phoenix.end2end.SecureQueryServerIT
[ERROR] org.apache.phoenix.end2end.SecureQueryServerIT Time elapsed: 0.007 s 
<<< ERROR!
java.io.IOException: Login failure for securecluster/localh...@example.com from 
keytab 
/Users/msalarkia/workspace/apache-phoenix/phoenix-queryserver/target/AbstractKerberisedTest/keytabs/test.keytab:
 javax.security.auth.login.LoginException: Checksum failed
at 
org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
Caused by: javax.security.auth.login.LoginException: Checksum failed
at 
org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
Caused by: sun.security.krb5.KrbCryptoException: Checksum failed
at 
org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
Caused by: java.security.GeneralSecurityException: Checksum failed
at 
org.apache.phoenix.end2end.SecureQueryServerIT.setUp(SecureQueryServerIT.java:53)
{code}
 
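One way to avoid the race, sketched below with a hypothetical helper (not the attached patch), is to stop sharing a fixed keytabs folder and give each test class a uniquely named keytab directory:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class KeytabDirs {
    // Hypothetical helper: each test class gets its own freshly created keytab
    // directory, so one test deleting its folder cannot clobber another's.
    static Path newKeytabDir(String testClassName) throws IOException {
        // createTempDirectory guarantees a unique, newly created directory.
        return Files.createTempDirectory("keytabs-" + testClassName + "-");
    }

    public static void main(String[] args) throws IOException {
        Path a = newKeytabDir("SecureQueryServerIT");
        Path b = newKeytabDir("HttpParamImpersonationQueryServerIT");
        // Distinct directories even when created concurrently.
        System.out.println(!a.equals(b) && Files.isDirectory(a) && Files.isDirectory(b));
    }
}
```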



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5104) PHOENIX-3547 breaks client backwards compatibility

2019-07-28 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5104:

Attachment: PHOENIX-5104.4.x-HBase-1.3.v1.patch

> PHOENIX-3547 breaks client backwards compatibility
> --
>
> Key: PHOENIX-5104
> URL: https://issues.apache.org/jira/browse/PHOENIX-5104
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Lars Hofhansl
>Assignee: Mehdi Salarkia
>Priority: Blocker
>  Labels: SFDC
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5104.4.x-HBase-1.3.v1.patch, PHOENIX-5104.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Scenario:
> * New 4.15 client
> ** {{create table ns1.test (pk1 integer not null, pk2 integer not null, pk3 
> integer not null, v1 float, v2 float, v3 integer CONSTRAINT pk PRIMARY KEY 
> (pk1, pk2, pk3));}}
> ** {{create local index l1 on ns1.test(v1);}}
> * Old 4.14.x client
> ** {{explain select count\(*) from test t1 where t1.v1 < 0.01;}}
> Result:
> {code}
> 0: jdbc:phoenix:localhost> explain select count(*) from ns1.test t1 where 
> t1.v1 < 0.01;
> Error: ERROR 201 (22000): Illegal data. Expected length of at least 8 bytes, 
> but had 2 (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 8 bytes, but had 2
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
> at 
> org.apache.phoenix.schema.types.PLong$LongCodec.decodeLong(PLong.java:256)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:115)
> at org.apache.phoenix.schema.types.PLong.toObject(PLong.java:31)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:994)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1035)
> at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1031)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendPKColumnValue(ExplainTable.java:207)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendScanRow(ExplainTable.java:282)
> at 
> org.apache.phoenix.iterate.ExplainTable.appendKeyRanges(ExplainTable.java:297)
> at 
> org.apache.phoenix.iterate.ExplainTable.explain(ExplainTable.java:127)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.explain(BaseResultIterators.java:1544)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.explain(ConcatResultIterator.java:92)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.explain(BaseGroupedAggregatingResultIterator.java:103)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.getPlanSteps(BaseQueryPlan.java:524)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:372)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:207)
> at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:516)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:603)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:575)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:302)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Updated] (PHOENIX-5369) BasePermissionsIT.testReadPermsOnTableIndexAndView test uses an incorrect user for permission based operations

2019-06-24 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5369:

Environment: 
{code:java}
2.1.1
{code}

  was:
{code:java}

2.1.1
3.0.0
{code}


> BasePermissionsIT.testReadPermsOnTableIndexAndView test uses an incorrect 
> user for permission based operations
> --
>
> Key: PHOENIX-5369
> URL: https://issues.apache.org/jira/browse/PHOENIX-5369
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: {code:java}
> 2.1.1
> {code}
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Minor
>
> org.apache.phoenix.end2end.BasePermissionsIT uses a regular user to revoke 
> permissions from another user, but the invoking user does not have permission 
> to do that, and as a result it runs into the following exception.
> {code:java}
> 2019-06-24 14:05:54,108 DEBUG [main] 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl(131): Call exception, 
> tries=10, retries=16, started=38507 ms ago, cancelled=false, 
> msg=java.io.IOException: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=regularUser1_N02, scope=hbase:acl, 
> family=l:regularUser2_N03, 
> params=[table=hbase:acl,family=l:regularUser2_N03],action=WRITE)
> at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
> at 
> org.apache.hadoop.hbase.security.access.AccessController.revoke(AccessController.java:2118)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.revoke(AccessControlProtos.java:10031)
> at 
> org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10192)
> at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8203)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
> at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> Caused by: org.apache.hadoop.hbase.security.AccessDeniedException: 
> Insufficient permissions (user=regularUser1_N02, scope=hbase:acl, 
> family=l:regularUser2_N03, 
> params=[table=hbase:acl,family=l:regularUser2_N03],action=WRITE)
> at 
> org.apache.hadoop.hbase.security.access.AccessController.preDelete(AccessController.java:1552)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$26.call(RegionCoprocessorHost.java:990)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$26.call(RegionCoprocessorHost.java:987)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
> at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preDelete(RegionCoprocessorHost.java:987)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.callPreMutateCPHook(HRegion.java:3709)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.access$800(HRegion.java:3470)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation$1.visit(HRegion.java:3539)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$BatchOperation.visitBatchOperations(HRegion.java:3084)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.checkAndPrepare(HRegion.java:3529)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3968)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3902)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3893)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3907)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4234)
> at org.apache.hadoop.hbase.regionserver.HRegion.delete(HRegion.java:2923)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2853)
> at 
> org.apache.hadoop.hbase.client.ClientServiceCallable.doMutate(ClientServiceCallable.java:55)
> at org.apache.hadoop.hbase.client.HTable$2.rpcCall(HTable.java:498)
> at org.apache.hadoop.hbase.client.HTable$2.rpcCall(HTable.java:493)
> at 
> 

[jira] [Created] (PHOENIX-5369) BasePermissionsIT.testReadPermsOnTableIndexAndView test uses an incorrect user for permission based operations

2019-06-24 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-5369:
---

 Summary: BasePermissionsIT.testReadPermsOnTableIndexAndView test 
uses an incorrect user for permission based operations
 Key: PHOENIX-5369
 URL: https://issues.apache.org/jira/browse/PHOENIX-5369
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
 Environment: {code:java}

2.1.1
3.0.0
{code}
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


org.apache.phoenix.end2end.BasePermissionsIT uses a regular user to revoke 
permissions from another user, but the invoking user does not have permission 
to do that, and as a result it runs into the following exception.
{code:java}
2019-06-24 14:05:54,108 DEBUG [main] 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl(131): Call exception, 
tries=10, retries=16, started=38507 ms ago, cancelled=false, 
msg=java.io.IOException: 
org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=regularUser1_N02, scope=hbase:acl, 
family=l:regularUser2_N03, 
params=[table=hbase:acl,family=l:regularUser2_N03],action=WRITE)
at org.apache.hadoop.hbase.security.User.runAsLoginUser(User.java:185)
at 
org.apache.hadoop.hbase.security.access.AccessController.revoke(AccessController.java:2118)
at 
org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService$1.revoke(AccessControlProtos.java:10031)
at 
org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos$AccessControlService.callMethod(AccessControlProtos.java:10192)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8203)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
Caused by: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
permissions (user=regularUser1_N02, scope=hbase:acl, 
family=l:regularUser2_N03, 
params=[table=hbase:acl,family=l:regularUser2_N03],action=WRITE)
at 
org.apache.hadoop.hbase.security.access.AccessController.preDelete(AccessController.java:1552)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$26.call(RegionCoprocessorHost.java:990)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$26.call(RegionCoprocessorHost.java:987)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:540)
at 
org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preDelete(RegionCoprocessorHost.java:987)
at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.callPreMutateCPHook(HRegion.java:3709)
at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.access$800(HRegion.java:3470)
at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation$1.visit(HRegion.java:3539)
at 
org.apache.hadoop.hbase.regionserver.HRegion$BatchOperation.visitBatchOperations(HRegion.java:3084)
at 
org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.checkAndPrepare(HRegion.java:3529)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3968)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3902)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3893)
at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3907)
at org.apache.hadoop.hbase.regionserver.HRegion.doBatchMutate(HRegion.java:4234)
at org.apache.hadoop.hbase.regionserver.HRegion.delete(HRegion.java:2923)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2853)
at 
org.apache.hadoop.hbase.client.ClientServiceCallable.doMutate(ClientServiceCallable.java:55)
at org.apache.hadoop.hbase.client.HTable$2.rpcCall(HTable.java:498)
at org.apache.hadoop.hbase.client.HTable$2.rpcCall(HTable.java:493)
at 
org.apache.hadoop.hbase.client.RegionServerCallable.call(RegionServerCallable.java:127)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:107)
at org.apache.hadoop.hbase.client.HTable.delete(HTable.java:503)
at 
org.apache.hadoop.hbase.security.access.AccessControlLists.removePermissionRecord(AccessControlLists.java:262)
at 
org.apache.hadoop.hbase.security.access.AccessControlLists.removeUserPermission(AccessControlLists.java:246)
at 

[jira] [Resolved] (PHOENIX-4838) Remove viewIndexId from PHOENIX protobuf

2019-06-24 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia resolved PHOENIX-4838.
-
Resolution: Fixed

> Remove viewIndexId from PHOENIX protobuf
> 
>
> Key: PHOENIX-4838
> URL: https://issues.apache.org/jira/browse/PHOENIX-4838
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.0.0
>Reporter: Mehdi Salarkia
>Priority: Minor
>
> As part of the discussion to address PHOENIX-3547, the suggestion was to add 
> a new long property (viewIndexLongId) and remove viewIndexId, which is an 
> int, in the next version of Apache Phoenix to support a larger number of 
> indexes while keeping backward compatibility during the migration process. 
>  More details:
>  [https://github.com/apache/phoenix/pull/317]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5363) Phoenix parcel artifacts for CDH releases don't pass Cloudera verification tools test

2019-06-21 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5363:

Description: 
Following the instructions from Cloudera 
 [https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]

and running the validation tool, you'll see:
{code:java}
$java -jar ./target/validator.jar -f 
APACHE_PHOENIX-5.1.0-cdh6.1.1.p0.0-el6.parcel
Validating: APACHE_PHOENIX-5.1.0-cdh6.1.1.p0.0-el6.parcel
==> Warning: Parcel is not compressed with gzip
==> java.io.IOException: Error detected parsing the header
{code}
The reason is that the parcel files are built as a plain tar file rather than 
a tar.gz file.

A valid parcel file should look like this:
{code:java}
$ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
modified: Mon Jun 3 13:25:50 2019, from Unix

{code}
while the Phoenix artifacts are:
{code:java}
$ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
{code}

  was:
Following instruction from Cloudera 
 [https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
 to verify Phoenix CDH artifacts shows that the artifacts are not built as 
tar.gz file but rather a tar file.

A valid parcel file should look like this:
{code:java}
$ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
modified: Mon Jun 3 13:25:50 2019, from Unix

{code}
while Phoenix Artifacts are  
{code:java}
$ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
{code}


> Phoenix parcel artifacts for CDH releases don't pass Cloudera verification 
> tools test
> -
>
> Key: PHOENIX-5363
> URL: https://issues.apache.org/jira/browse/PHOENIX-5363
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh, 5.1.0-cdh
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Minor
>
> Following the instructions from Cloudera 
>  [https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
> and running the validation tool, you'll see:
> {code:java}
> $java -jar ./target/validator.jar -f 
> APACHE_PHOENIX-5.1.0-cdh6.1.1.p0.0-el6.parcel
> Validating: APACHE_PHOENIX-5.1.0-cdh6.1.1.p0.0-el6.parcel
> ==> Warning: Parcel is not compressed with gzip
> ==> java.io.IOException: Error detected parsing the header
> {code}
> The reason is that the parcel files are built as a plain tar file rather 
> than a tar.gz file.
> A valid parcel file should look like this:
> {code:java}
> $ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
> GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
> modified: Mon Jun 3 13:25:50 2019, from Unix
> {code}
> while the Phoenix artifacts are:
> {code:java}
> $ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
> APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5363) Phoenix parcel artifacts for CDH releases don't pass Cloudera verification tools test

2019-06-21 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5363:

Description: 
Following the instructions from Cloudera 
 [https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
 to verify the Phoenix CDH artifacts shows that the artifacts are built as a 
plain tar file rather than a tar.gz file.

A valid parcel file should look like this:
{code:java}
$ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
modified: Mon Jun 3 13:25:50 2019, from Unix

{code}
while the Phoenix artifacts are:
{code:java}
$ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
{code}

  was:
Applying the following instruction from Cloudera 
[https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
to verify Phoenix CDH artifacts shows that the artifacts are not built as 
tar.gz file but rather a tar file.

A valid parcel file should look like this:
{code:java}
$ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
modified: Mon Jun 3 13:25:50 2019, from Unix

{code}
while Phoenix Artifacts are  
{code:java}
$ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
{code}


> Phoenix parcel artifacts for CDH releases don't pass Cloudera verification 
> tools test
> -
>
> Key: PHOENIX-5363
> URL: https://issues.apache.org/jira/browse/PHOENIX-5363
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh, 5.1.0-cdh
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Minor
>
> Following the instructions from Cloudera 
>  [https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
>  to verify the Phoenix CDH artifacts shows that the artifacts are built as a 
> plain tar file rather than a tar.gz file.
> A valid parcel file should look like this:
> {code:java}
> $ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
> GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
> modified: Mon Jun 3 13:25:50 2019, from Unix
> {code}
> while the Phoenix artifacts are:
> {code:java}
> $ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
> APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5363) Phoenix parcel artifacts for CDH releases don't pass Cloudera verification tools test

2019-06-21 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-5363:
---

 Summary: Phoenix parcel artifacts for CDH releases don't pass 
Cloudera verification tools test
 Key: PHOENIX-5363
 URL: https://issues.apache.org/jira/browse/PHOENIX-5363
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.2-cdh, 5.1.0-cdh
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


Applying the following instructions from Cloudera 
[https://github.com/cloudera/cm_ext/wiki/Building-a-parcel]
to verify the Phoenix CDH artifacts shows that the artifacts are built as a 
plain tar file rather than a tar.gz file.

A valid parcel file should look like this:
{code:java}
$ file GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel 
GPLEXTRAS-5.16.2-1.cdh5.16.2.p0.8-el5.parcel: gzip compressed data, last 
modified: Mon Jun 3 13:25:50 2019, from Unix

{code}
while the Phoenix artifacts are:
{code:java}
$ file APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel 
APACHE_PHOENIX-4.14.1-cdh5.11.2.p0.0-el7.parcel: POSIX tar archive
{code}
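The difference can be checked without Cloudera's validator: a gzip stream starts with the magic bytes 1f 8b, while a plain tar archive does not. A sketch with hypothetical file names:

```shell
# Build a plain tar (what the broken parcels are) and a gzipped tar
# (what validator.jar expects), then compare their leading magic bytes.
echo data > payload.txt
tar -cf parcel.tar payload.txt
gzip -c parcel.tar > parcel.tar.gz
head -c 2 parcel.tar | od -An -tx1      # not 1f 8b: plain tar archive
head -c 2 parcel.tar.gz | od -An -tx1   # 1f 8b: valid gzip-compressed parcel
```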



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5158) TestCoveredColumnIndexCodec is failing in CDH6 branch

2019-06-20 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia reassigned PHOENIX-5158:
---

Assignee: Mehdi Salarkia

> TestCoveredColumnIndexCodec is failing in CDH6 branch
> -
>
> Key: PHOENIX-5158
> URL: https://issues.apache.org/jira/browse/PHOENIX-5158
> Project: Phoenix
>  Issue Type: Task
>Reporter: Pedro Boado
>Assignee: Mehdi Salarkia
>Priority: Major
> Fix For: 5.1.0-cdh
>
>
> {{TestCoveredColumnIndexCodec.testGeneratedIndexUpdates}} is failing in cdh6 
> branch
> {code:java}
> java.lang.AssertionError: Had some index updates, though it should have been 
> covered by the delete
> (...)
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.ensureNoUpdatesWhenCoveredByDelete(TestCoveredColumnIndexCodec.java:243)
>   at 
> org.apache.phoenix.hbase.index.covered.TestCoveredColumnIndexCodec.testGeneratedIndexUpdates(TestCoveredColumnIndexCodec.java:221)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5221) Phoenix Kerberos Integration tests failure on Redhat Linux

2019-05-30 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-5221:

Fix Version/s: 4.15.0

> Phoenix Kerberos Integration tests failure on Redhat Linux
> --
>
> Key: PHOENIX-5221
> URL: https://issues.apache.org/jira/browse/PHOENIX-5221
> Project: Phoenix
>  Issue Type: Bug
> Environment: Redhat / Centos Linux 
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Major
> Fix For: 4.15.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Due to this bug https://bugzilla.redhat.com/show_bug.cgi?id=668830 we need to 
> use `localhost.localdomain` when running these tests on Jenkins (CentOS), but 
> on Mac OS it should be `localhost` for the tests to pass.
> The reason is that the Kerberos principals in these tests are looked up from 
> /etc/hosts, and 127.0.0.1 resolves to `localhost.localdomain` rather than 
> `localhost` on Redhat. The KDC sees `localhost` != `localhost.localdomain`, 
> and as a result the test fails with an authentication error.
> It's also important to note that these principals are shared between HDFS and 
> HBase in this mini HBase cluster.
> Some more reading: https://access.redhat.com/solutions/57330



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5221) Phoenix Kerberos Integration tests failure on Redhat Linux

2019-05-30 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia resolved PHOENIX-5221.
-
Resolution: Fixed

> Phoenix Kerberos Integration tests failure on Redhat Linux
> --
>
> Key: PHOENIX-5221
> URL: https://issues.apache.org/jira/browse/PHOENIX-5221
> Project: Phoenix
>  Issue Type: Bug
> Environment: Redhat / Centos Linux 
>Reporter: Mehdi Salarkia
>Assignee: Mehdi Salarkia
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Due to this bug https://bugzilla.redhat.com/show_bug.cgi?id=668830 we need to 
> use `localhost.localdomain` when running these tests on Jenkins (CentOS), but 
> on Mac OS it should be `localhost` for the tests to pass.
> The reason is that the Kerberos principals in these tests are looked up from 
> /etc/hosts, and 127.0.0.1 resolves to `localhost.localdomain` rather than 
> `localhost` on Redhat. The KDC sees `localhost` != `localhost.localdomain`, 
> and as a result the test fails with an authentication error.
> It's also important to note that these principals are shared between HDFS and 
> HBase in this mini HBase cluster.
> Some more reading: https://access.redhat.com/solutions/57330



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5221) Phoenix Kerberos Integration tests failure on Redhat Linux

2019-03-30 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-5221:
---

 Summary: Phoenix Kerberos Integration tests failure on Redhat Linux
 Key: PHOENIX-5221
 URL: https://issues.apache.org/jira/browse/PHOENIX-5221
 Project: Phoenix
  Issue Type: Bug
 Environment: Redhat / Centos Linux 
Reporter: Mehdi Salarkia
Assignee: Mehdi Salarkia


Due to this bug: https://bugzilla.redhat.com/show_bug.cgi?id=668830
we need to use `localhost.localdomain` when running these tests on Jenkins 
(CentOS), but for Mac OS it should be `localhost` for them to pass.
The reason is that the Kerberos principals in these tests are looked up from 
/etc/hosts, and 127.0.0.1 resolves to `localhost.localdomain` rather than 
`localhost` on Redhat. The KDC sees that `localhost` != 
`localhost.localdomain`, and as a result the test fails with an 
authentication error.
It's also important to note that these principals are shared between HDFS and 
HBase in this mini HBase cluster.
Some more reading: https://access.redhat.com/solutions/57330
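
The failure mode described above can be sketched in a few lines of Python (the realm, service name, and helper names here are illustrative, not taken from the Phoenix test code):

```python
import socket

def canonical_localhost() -> str:
    """Return what 127.0.0.1 reverse-resolves to via /etc/hosts.

    On a stock Redhat/CentOS box this is 'localhost.localdomain';
    on Mac OS it is 'localhost'.
    """
    return socket.gethostbyaddr("127.0.0.1")[0]

def service_principal(service: str, host: str, realm: str = "EXAMPLE.COM") -> str:
    # Kerberos service principals embed the hostname, so a KDC that issued
    # tickets for hbase/localhost@REALM will reject
    # hbase/localhost.localdomain@REALM.
    return f"{service}/{host}@{realm}"
```

Calling `service_principal("hbase", canonical_localhost())` on the two platforms yields the two different principals that the KDC disagrees about, which is why the same test passes on Mac OS and fails on Redhat.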





[jira] [Updated] (PHOENIX-3547) Promote CATALOG.VIEW_INDEX_ID to an int

2018-08-31 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-3547:

Attachment: 4.x-HBase-1.4.patch
4.x-HBase-0.98.patch
4.x-HBase-1.1.patch
4.x-HBase-1.2.patch
4.x-HBase-1.3.patch
4.x-cdh5.11.patch
4.x-cdh5.12.patch
4.x-cdh5.13.patch
4.x-cdh5.14.patch

> Promote CATALOG.VIEW_INDEX_ID to an int
> ---
>
> Key: PHOENIX-3547
> URL: https://issues.apache.org/jira/browse/PHOENIX-3547
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Jeremy Huffman
>Assignee: Mehdi Salarkia
>Priority: Minor
> Attachments: 4.x-HBase-0.98.patch, 4.x-HBase-1.1.patch, 
> 4.x-HBase-1.2.patch, 4.x-HBase-1.3.patch, 4.x-HBase-1.4.patch, 
> 4.x-cdh5.11.patch, 4.x-cdh5.12.patch, 4.x-cdh5.13.patch, 4.x-cdh5.14.patch, 
> master-PHOENIX-3547.patch
>
>
> Increase the size of CATALOG.VIEW_INDEX_ID from smallint to int to support a 
> large number of indexed views on a single table.
> Per James: "The code would just need to be tolerant when reading the data if 
> the length is two byte short versus four byte int. At write time, we'd just 
> always write an int."
> See: 
> https://lists.apache.org/thread.html/22849e4fc73452cee3bea763cf6d5af7164dedcb44573ba6b9f452a2@%3Cuser.phoenix.apache.org%3E
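
The tolerant read / always-write-int scheme quoted above can be sketched as follows (a standalone Python illustration of the idea, not Phoenix's actual serialization code):

```python
import struct

def read_view_index_id(data: bytes) -> int:
    # Tolerant read: legacy cells hold a 2-byte smallint, new cells a
    # 4-byte int; both are big-endian.
    if len(data) == 2:
        return struct.unpack(">h", data)[0]
    if len(data) == 4:
        return struct.unpack(">i", data)[0]
    raise ValueError(f"unexpected VIEW_INDEX_ID length: {len(data)}")

def write_view_index_id(value: int) -> bytes:
    # At write time, always emit the wider 4-byte int.
    return struct.pack(">i", value)

# A legacy smallint value and a value too large for a smallint both round-trip.
assert read_view_index_id(struct.pack(">h", 123)) == 123
assert read_view_index_id(write_view_index_id(70_000)) == 70_000
```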





[jira] [Updated] (PHOENIX-3547) Promote CATALOG.VIEW_INDEX_ID to an int

2018-08-31 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-3547:

Attachment: (was: PHOENIX-3547.patch)

> Promote CATALOG.VIEW_INDEX_ID to an int
> ---
>
> Key: PHOENIX-3547
> URL: https://issues.apache.org/jira/browse/PHOENIX-3547
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Jeremy Huffman
>Assignee: Mehdi Salarkia
>Priority: Minor
> Attachments: master-PHOENIX-3547.patch
>
>
> Increase the size of CATALOG.VIEW_INDEX_ID from smallint to int to support a 
> large number of indexed views on a single table.
> Per James: "The code would just need to be tolerant when reading the data if 
> the length is two byte short versus four byte int. At write time, we'd just 
> always write an int."
> See: 
> https://lists.apache.org/thread.html/22849e4fc73452cee3bea763cf6d5af7164dedcb44573ba6b9f452a2@%3Cuser.phoenix.apache.org%3E





[jira] [Updated] (PHOENIX-4871) Query parser throws exception on parameterized join

2018-08-27 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-4871:

Attachment: PHOENIX-4871-repo.patch

> Query parser throws exception on parameterized join
> ---
>
> Key: PHOENIX-4871
> URL: https://issues.apache.org/jira/browse/PHOENIX-4871
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: This issue exists on version 4 and I could reproduce it on the 
> current git repo version.
>Reporter: Mehdi Salarkia
>Priority: Major
> Attachments: PHOENIX-4871-repo.patch
>
>
> When a join SELECT statement has a parameter, the Phoenix query parser fails 
> to create the query metadata and rejects this query:
> {code:java}
> SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1" ) WHERE "B"."b2" = 
> ? 
> {code}
> with the following exception: 
>  
> {code:java}
> org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : while 
> preparing SQL: SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1") 
> WHERE ("B"."b2" = ?) 
> at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
> at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:358)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:175)
> at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testParameterizedJoin(QueryServerBasicsIT.java:377)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
> at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:700)
> at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:726)
> at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:195)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1215)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1186)
> at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
> at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
> at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
> at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 

[jira] [Created] (PHOENIX-4871) Query parser throws exception on parameterized join

2018-08-27 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-4871:
---

 Summary: Query parser throws exception on parameterized join
 Key: PHOENIX-4871
 URL: https://issues.apache.org/jira/browse/PHOENIX-4871
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
 Environment: This issue exists on version 4 and I could reproduce it on the 
current git repo version.
Reporter: Mehdi Salarkia


When a join SELECT statement has a parameter, the Phoenix query parser fails to 
create the query metadata and rejects this query:
{code:java}
SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1" ) WHERE "B"."b2" = ? 
{code}
with the following exception: 

 
{code:java}
org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : while 
preparing SQL: SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1") WHERE 
("B"."b2" = ?) 

at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:358)
at 
org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:175)
at 
org.apache.phoenix.end2end.QueryServerBasicsIT.testParameterizedJoin(QueryServerBasicsIT.java:377)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
at 
com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
at 
com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
at 
com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
Parameter value unbound. Parameter at index 1 is unbound
at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:700)
at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:726)
at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:195)
at 
org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1215)
at 
org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1186)
at 
org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
at 
org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
at 
org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 

[jira] [Updated] (PHOENIX-3547) Promote CATALOG.VIEW_INDEX_ID to an int

2018-08-22 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-3547:

Attachment: (was: PHOENIX-3547.patch)

> Promote CATALOG.VIEW_INDEX_ID to an int
> ---
>
> Key: PHOENIX-3547
> URL: https://issues.apache.org/jira/browse/PHOENIX-3547
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Jeremy Huffman
>Assignee: Mehdi Salarkia
>Priority: Minor
>
> Increase the size of CATALOG.VIEW_INDEX_ID from smallint to int to support a 
> large number of indexed views on a single table.
> Per James: "The code would just need to be tolerant when reading the data if 
> the length is two byte short versus four byte int. At write time, we'd just 
> always write an int."
> See: 
> https://lists.apache.org/thread.html/22849e4fc73452cee3bea763cf6d5af7164dedcb44573ba6b9f452a2@%3Cuser.phoenix.apache.org%3E






[jira] [Updated] (PHOENIX-3547) Promote CATALOG.VIEW_INDEX_ID to an int

2018-08-19 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-3547:

Attachment: PHOENIX-3547.patch

> Promote CATALOG.VIEW_INDEX_ID to an int
> ---
>
> Key: PHOENIX-3547
> URL: https://issues.apache.org/jira/browse/PHOENIX-3547
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Jeremy Huffman
>Assignee: Mehdi Salarkia
>Priority: Minor
> Attachments: PHOENIX-3547.patch
>
>
> Increase the size of CATALOG.VIEW_INDEX_ID from smallint to int to support a 
> large number of indexed views on a single table.
> Per James: "The code would just need to be tolerant when reading the data if 
> the length is two byte short versus four byte int. At write time, we'd just 
> always write an int."
> See: 
> https://lists.apache.org/thread.html/22849e4fc73452cee3bea763cf6d5af7164dedcb44573ba6b9f452a2@%3Cuser.phoenix.apache.org%3E





[jira] [Updated] (PHOENIX-4838) Remove viewIndexId from PHOENIX protobuf

2018-08-08 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-4838:

Description: 
As part of the discussion to address PHOENIX-3547, the suggestion was to add a 
new long property (viewIndexLongId) and remove viewIndexId, which is an int, in 
the next version of Apache Phoenix, to support a larger number of indices while 
keeping backward compatibility during the migration process.
 More details:
 [https://github.com/apache/phoenix/pull/317]

  was:
As part of discussion to address PHOENIX-3547 the suggestion was to add a new 
long property (viewIndexLongId) and remove veiwIndexId which is an int in the 
next version of Apache Phoenix to support more number of indices while keeping 
backward compatibility during the migration process. 
 More details:
 [https://github.com/apache/phoenix/pull/317]


> Remove viewIndexId from PHOENIX protobuf
> 
>
> Key: PHOENIX-4838
> URL: https://issues.apache.org/jira/browse/PHOENIX-4838
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.0.0
>Reporter: Mehdi Salarkia
>Priority: Minor
>
> As part of the discussion to address PHOENIX-3547, the suggestion was to add 
> a new long property (viewIndexLongId) and remove viewIndexId, which is an 
> int, in the next version of Apache Phoenix, to support a larger number of 
> indices while keeping backward compatibility during the migration process.
>  More details:
>  [https://github.com/apache/phoenix/pull/317]
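
The proposed migration read path can be sketched like this (the field names are taken from the description above; the plain dict stands in for a decoded protobuf message, so this is an illustration rather than Phoenix's actual protobuf handling):

```python
from typing import Optional

def effective_view_index_id(msg: dict) -> Optional[int]:
    # During migration both fields may exist: prefer the new long field
    # (viewIndexLongId) and fall back to the legacy int field (viewIndexId).
    if msg.get("viewIndexLongId") is not None:
        return msg["viewIndexLongId"]
    return msg.get("viewIndexId")

# Old client: only the legacy field is set.
assert effective_view_index_id({"viewIndexId": 7}) == 7
# New client: the long field wins even when the legacy one is also present.
assert effective_view_index_id({"viewIndexId": 7, "viewIndexLongId": 70_000}) == 70_000
```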





[jira] [Updated] (PHOENIX-4838) Remove viewIndexId from PHOENIX protobuf

2018-08-07 Thread Mehdi Salarkia (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mehdi Salarkia updated PHOENIX-4838:

Description: 
As part of the discussion to address PHOENIX-3547, the suggestion was to add a 
new long property (viewIndexLongId) and remove viewIndexId, which is an int, in 
the next version of Apache Phoenix, to support a larger number of indices while 
keeping backward compatibility during the migration process.
 More details:
 [https://github.com/apache/phoenix/pull/317]

  was:
As part of discussion to address PHOENIX-3547 the suggestion was to add a new 
long property (viewIndexLongId) and remove veiwIndexId which is an int in the 
next version of Apache Phoenix to support more number of indices. 
 More details:
 [https://github.com/apache/phoenix/pull/317]


> Remove viewIndexId from PHOENIX protobuf
> 
>
> Key: PHOENIX-4838
> URL: https://issues.apache.org/jira/browse/PHOENIX-4838
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.0.0
>Reporter: Mehdi Salarkia
>Priority: Minor
>
> As part of the discussion to address PHOENIX-3547, the suggestion was to add 
> a new long property (viewIndexLongId) and remove viewIndexId, which is an 
> int, in the next version of Apache Phoenix, to support a larger number of 
> indices while keeping backward compatibility during the migration process.
>  More details:
>  [https://github.com/apache/phoenix/pull/317]





[jira] [Created] (PHOENIX-4838) Remove viewIndexId from PHOENIX protobuf

2018-08-07 Thread Mehdi Salarkia (JIRA)
Mehdi Salarkia created PHOENIX-4838:
---

 Summary: Remove viewIndexId from PHOENIX protobuf
 Key: PHOENIX-4838
 URL: https://issues.apache.org/jira/browse/PHOENIX-4838
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.0.0
Reporter: Mehdi Salarkia


As part of the discussion to address PHOENIX-3547, the suggestion was to add a 
new long property (viewIndexLongId) and remove viewIndexId, which is an int, in 
the next version of Apache Phoenix, to support a larger number of indices.
 More details:
 [https://github.com/apache/phoenix/pull/317]


