[jira] [Assigned] (PHOENIX-7364) Make phoenix.updatable.view.restriction.enabled a table level property

2024-07-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7364:


Assignee: Jing Yu

> Make phoenix.updatable.view.restriction.enabled a table level property
> --
>
> Key: PHOENIX-7364
> URL: https://issues.apache.org/jira/browse/PHOENIX-7364
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Jacob Isaac
>Assignee: Jing Yu
>Priority: Major
>
> The PHOENIX-4555 specifications allow views to be created with certain 
> constraints and conditions.
> It would make sense for this to be a table property, as that gives users 
> more flexibility. When defined for an existing table, the upgrade path for 
> the table should ensure that all views under the table conform to the spec. 
> We should store that state (view restriction enabled and verified) in the 
> table header, so that we do not have to perform these complete view-hierarchy 
> checks for every new view creation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7364) Make phoenix.updatable.view.restriction.enabled a table level property

2024-07-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7364:


 Summary: Make phoenix.updatable.view.restriction.enabled a table 
level property
 Key: PHOENIX-7364
 URL: https://issues.apache.org/jira/browse/PHOENIX-7364
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


The PHOENIX-4555 specifications allow views to be created with certain 
constraints and conditions.

It would make sense for this to be a table property, as that gives users more 
flexibility. When defined for an existing table, the upgrade path for the 
table should ensure that all views under the table conform to the spec. We 
should store that state (view restriction enabled and verified) in the table 
header, so that we do not have to perform these complete view-hierarchy checks 
for every new view creation.
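
If this became a table-level property, the intended usage might look like the 
following sketch. Note that UPDATABLE_VIEW_RESTRICTION_ENABLED is a 
hypothetical property name chosen here only for illustration; the issue does 
not specify the final DDL surface:

{code:sql}
-- Hypothetical per-table replacement for the cluster-wide
-- phoenix.updatable.view.restriction.enabled config flag.
CREATE TABLE MY_SCHEMA.BASE_TABLE (
    OID CHAR(15) NOT NULL,
    KP  CHAR(3) NOT NULL,
    VAL1 INTEGER
    CONSTRAINT PK PRIMARY KEY (OID, KP))
    MULTI_TENANT=true, UPDATABLE_VIEW_RESTRICTION_ENABLED=true;

-- For an existing table, the upgrade path would first verify that every
-- view under the table conforms to the PHOENIX-4555 spec, then persist the
-- "restriction enabled and verified" state in the table header.
ALTER TABLE MY_SCHEMA.BASE_TABLE SET UPDATABLE_VIEW_RESTRICTION_ENABLED=true;
{code}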





Re: [ANNOUNCE] Palash Chauhan as Phoenix Committer

2024-06-13 Thread Jacob Isaac
Congrats Palash!

On Wed, Jun 12, 2024 at 5:45 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> Congratulations Palash!!
>
> On Wed, Jun 12, 2024, 4:30 PM Kadir Ozdemir
>  wrote:
>
>> Congratulations Palash!
>>
>> On Tue, Jun 11, 2024 at 9:53 PM Viraj Jasani  wrote:
>>
>> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Palash
>> > Chauhan has accepted the PMC's invitation to become a committer on
>> Apache
>> > Phoenix.
>> >
>> > We appreciate all of the great contributions Palash has made to the
>> > community thus far and we look forward to their continued involvement.
>> >
>> > Congratulations and Welcome, Palash!
>> >
>>
>


[jira] [Assigned] (PHOENIX-7214) Purging expired rows during minor compaction for immutable tables

2024-03-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7214:


Assignee: Jacob Isaac

> Purging expired rows during minor compaction for immutable tables
> -
>
> Key: PHOENIX-7214
> URL: https://issues.apache.org/jira/browse/PHOENIX-7214
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Kadir Ozdemir
>    Assignee: Jacob Isaac
>Priority: Major
>
> HBase minor compaction does not remove deleted or expired cells, since 
> minor compaction works on a subset of HFiles. However, it is safe to remove 
> expired rows for immutable tables. For immutable tables, rows are inserted 
> but not updated, so a given row will have only one version. This means we 
> can safely remove expired rows during minor compaction using 
> CompactionScanner in Phoenix.
> CompactionScanner currently runs only for major compaction. We can 
> introduce a new table attribute called MINOR_COMPACT_TTL. Phoenix can then 
> run CompactionScanner for minor compaction as well, for tables with 
> MINOR_COMPACT_TTL = TRUE. By doing so, expired rows will be purged during 
> minor compaction for these tables. This will be useful when the TTL is less 
> than 7 days, say 2 days, as major compaction typically runs only once a week.





[jira] [Assigned] (PHOENIX-7211) Identify IT tests that can be run successfully against real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7211:


Assignee: Divneet Kaur

> Identify IT tests that can be run successfully against real distributed 
> cluster
> ---
>
> Key: PHOENIX-7211
> URL: https://issues.apache.org/jira/browse/PHOENIX-7211
> Project: Phoenix
>  Issue Type: Sub-task
>    Reporter: Jacob Isaac
>Assignee: Divneet Kaur
>Priority: Major
>
> Categorize/Identify the tests that can be run against real distributed 
> clusters with minimal changes to the tests and test framework.





[jira] [Created] (PHOENIX-7211) Identify IT tests that can be run successfully against real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7211:


 Summary: Identify IT tests that can be run successfully against 
real distributed cluster
 Key: PHOENIX-7211
 URL: https://issues.apache.org/jira/browse/PHOENIX-7211
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Categorize/Identify the tests that can be run against real distributed clusters 
with minimal changes to the tests and test framework.





[jira] [Created] (PHOENIX-7210) Ensure Phoenix IT tests can be run against a real distributed cluster

2024-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7210:


 Summary: Ensure Phoenix IT tests can be run against a real 
distributed cluster
 Key: PHOENIX-7210
 URL: https://issues.apache.org/jira/browse/PHOENIX-7210
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


When planning a new Phoenix release in OSS, and subsequent upgrades in 
production, we must ensure that the release build is well tested. Currently, 
running the IT test suite ensures that a given (build) version of the client 
and server works as desired. A few backward-compatibility tests are also run 
as part of the IT test suite, but they are very minimal in coverage. The 
purpose of this JIRA is to explore how we can enhance our IT test framework 
to provide test coverage and backward-compatibility testing for various 
combinations of client-server versions.

Our current OSS release sign-off process is described 
[here|https://phoenix.apache.org/release.html].


Apache Phoenix follows semantic versioning i.e. for a given version x.y.z, we 
have:

* Major version:
    * x is the major version.
    * A major upgrade needs to be done when you make incompatible API changes. 
There will generally be public-facing APIs that have changed, metadata changes 
and/or changes that affect existing end-user behavior.
* Minor version:
    * y is the minor version.
    * A minor upgrade needs to be done when you add functionality in a 
backwards compatible manner. Any changes to system table schema (for ex: 
SYSTEM.CATALOG) such as addition of columns, must be done in either a minor or 
major version upgrade.
* Patch version:
    * z is the patch version.
    * A patch upgrade can be done when you make backwards compatible bug fixes. 
This is particularly useful in providing a quick minimal change release on top 
of a pre-existing minor/major version release which fixes bugs.
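
The versioning rules above can be sketched as a small classifier; this is an 
illustrative standalone snippet, not Phoenix code, and the class and method 
names are made up for the example:

```java
// Sketch: classify the upgrade type between two semantic versions x.y.z,
// following the major/minor/patch rules described above.
public class SemVerUpgrade {
    public static String upgradeType(String from, String to) {
        int[] a = parse(from), b = parse(to);
        if (a[0] != b[0]) return "major"; // incompatible API/metadata changes
        if (a[1] != b[1]) return "minor"; // backwards-compatible features, schema additions
        if (a[2] != b[2]) return "patch"; // backwards-compatible bug fixes
        return "none";
    }

    private static int[] parse(String v) {
        String[] p = v.split("\\.");
        return new int[] {
            Integer.parseInt(p[0]), Integer.parseInt(p[1]), Integer.parseInt(p[2])
        };
    }

    public static void main(String[] args) {
        // 5.1.3 -> 5.2.0 adds functionality in a compatible manner
        System.out.println(upgradeType("5.1.3", "5.2.0")); // minor
    }
}
```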


When upgrading the Major/Minor version we typically run tests other than the IT 
tests to cover various client/server combinations that can manifest during an 
upgrade.

1. Client with old Phoenix jar + Servers with mixed Phoenix jars + old metadata 
(few servers have been upgraded)
2. Client with old Phoenix jar + Server with new Phoenix jar + old metadata 
(bits upgraded)
3. Client with old Phoenix jar + Server with new Phoenix jar + new metadata 
(metadata upgraded)
4. Client with new Phoenix jar + Server with new Phoenix jar + new metadata 
(clients upgraded)


It would be a more exhaustive set of tests if we could run the Phoenix IT test 
suites against a distributed cluster with the above combinations.





[jira] [Resolved] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-12-06 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac resolved PHOENIX-7040.
--
Resolution: Fixed

> Support TTL for views using the new column TTL in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-7040
> URL: https://issues.apache.org/jira/browse/PHOENIX-7040
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Allow views to be created with TTL specs.
> Ensure TTL is specified only once in the view hierarchy.
> Child views should inherit TTL values from their parent, when not specified 
> for the given view.
> Indexes should inherit the TTL values from the base tables/views.





[jira] [Resolved] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-12-06 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac resolved PHOENIX-7041.
--
Resolution: Fixed

> Populate ROW_KEY_PREFIX column when creating views
> --
>
> Key: PHOENIX-7041
> URL: https://issues.apache.org/jira/browse/PHOENIX-7041
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.1
>
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created through the view will share a prefixed 
> KeyRange. The view can thus simply be represented by the prefixed KeyRange 
> generated from the expression representing the view statement.
> The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange 
> will be used to create a PrefixIndex, mapping a row prefix to a view.





[jira] [Created] (PHOENIX-7108) Provide support for pruning expired rows of views using Phoenix level compactions

2023-11-10 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7108:


 Summary: Provide support for pruning expired rows of views using 
Phoenix level compactions
 Key: PHOENIX-7108
 URL: https://issues.apache.org/jira/browse/PHOENIX-7108
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac


Modify the Phoenix compaction framework introduced in PHOENIX-6888 to prune 
TTL-expired rows of views.





[jira] [Created] (PHOENIX-7107) Add support for indexing on SYSTEM.CATALOG table

2023-11-10 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7107:


 Summary: Add support for indexing on SYSTEM.CATALOG table
 Key: PHOENIX-7107
 URL: https://issues.apache.org/jira/browse/PHOENIX-7107
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac
Assignee: Jacob Isaac


With partial indexing now available (PHOENIX-7032), having the ability to 
partially index SYSTEM.CATALOG rows would be useful, as it would allow us to 
scan catalog properties more efficiently.
For example, the SYSTEM.CHILD_LINK table can be thought of as a partial index 
of the SYSTEM.CATALOG rows with LINK_TYPE = 4.
Another example would be querying which tables/views have a TTL set.
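
Since PHOENIX-7032 partial indexes are declared with a WHERE clause, the idea 
can be sketched as below. Indexing SYSTEM.CATALOG is not supported today, so 
this is purely illustrative and the index names are made up:

{code:sql}
-- Hypothetical: a partial index serving the same purpose as
-- SYSTEM.CHILD_LINK, materializing only the parent->child link rows.
CREATE INDEX CATALOG_CHILD_LINKS
    ON SYSTEM.CATALOG (TENANT_ID, TABLE_SCHEM, TABLE_NAME)
    WHERE LINK_TYPE = 4;

-- Hypothetical: an index restricted to rows with TTL set would answer
-- "which tables/views have a TTL" without a full catalog scan.
CREATE INDEX CATALOG_TTL ON SYSTEM.CATALOG (TTL) WHERE TTL IS NOT NULL;
{code}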





[jira] [Created] (PHOENIX-7068) Update Phoenix apache website views page with additional information on usage of views and view indexes

2023-10-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7068:


 Summary: Update Phoenix apache website views page with additional 
information on usage of views and view indexes
 Key: PHOENIX-7068
 URL: https://issues.apache.org/jira/browse/PHOENIX-7068
 Project: Phoenix
  Issue Type: Task
Reporter: Jacob Isaac


Document the findings from PHOENIX-4555, PHOENIX-7047, PHOENIX-7067 





[jira] [Created] (PHOENIX-7067) View indexes should be created only on non overlapping updatable views

2023-10-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7067:


 Summary: View indexes should be created only on non overlapping 
updatable views
 Key: PHOENIX-7067
 URL: https://issues.apache.org/jira/browse/PHOENIX-7067
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


Updatable views, by the definition outlined in PHOENIX-4555, are disjoint 
partitions/virtual tables on the base HBase table.
View indexes should only be allowed to be defined on these partitions.
As PHOENIX-7047 revealed, index rows are not generated, or get clobbered, for 
certain multi-level views.

This JIRA will try to address these issues and add the proper constraints on 
when updatable views and view indexes can be created:
1. A view should be allowed to extend the parent PK (i.e. add its own PK 
columns in the view definition) only when there are no indexes in the parent 
hierarchy, and vice versa.
2. View indexes can be defined on a given view only when there are no child 
views that have extended the PK of the base view.
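
Under the proposed constraints, the two rules could play out as in this 
sketch (the schema, table, and view names are illustrative only):

{code:sql}
CREATE TABLE S.BASE (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, V1 INTEGER
    CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true;

-- A view that extends the parent PK with its own PK column ID1:
CREATE VIEW S.GV (ID1 INTEGER NOT NULL, C1 VARCHAR
    CONSTRAINT PK PRIMARY KEY (ID1)) AS SELECT * FROM S.BASE WHERE KP = 'P01';

-- Rule 2: this index would be allowed only while no child view of S.GV
-- has further extended the PK:
CREATE INDEX GV_IDX ON S.GV (ID1) INCLUDE (C1);

-- Rule 1: once GV_IDX exists, creating a child view that extends the PK
-- again (adding ROW_ID) would be rejected under the proposal:
CREATE VIEW S.CV (ROW_ID CHAR(15) NOT NULL
    CONSTRAINT PK PRIMARY KEY (ROW_ID)) AS SELECT * FROM S.GV WHERE ID1 = 42724;
{code}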

 





[jira] [Assigned] (PHOENIX-7063) Track and account garbage collected phoenix connections

2023-10-05 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7063:


Assignee: Lokesh Khurana

> Track and account garbage collected phoenix connections
> ---
>
> Key: PHOENIX-7063
> URL: https://issues.apache.org/jira/browse/PHOENIX-7063
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.3
>    Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> In a production environment, misbehaving clients can forget to close 
> Phoenix connections, which can result in Phoenix connections leaking. 
> Moreover, when Phoenix connections are tracked and limited per JVM by the 
> GLOBAL_OPEN_PHOENIX_CONNECTIONS metrics counter, this can lead to client 
> requests for Phoenix connections being rejected.
> Tracking and keeping count of garbage-collected Phoenix connections can 
> alleviate the above issues.
> Providing additional logging during such reclaims will also provide more 
> insight into a production environment.





[jira] [Created] (PHOENIX-7063) Track and account garbage collected phoenix connections

2023-10-05 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7063:


 Summary: Track and account garbage collected phoenix connections
 Key: PHOENIX-7063
 URL: https://issues.apache.org/jira/browse/PHOENIX-7063
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.1.3
Reporter: Jacob Isaac


In a production environment, misbehaving clients can forget to close Phoenix 
connections, which can result in Phoenix connections leaking.

Moreover, when Phoenix connections are tracked and limited per JVM by the 
GLOBAL_OPEN_PHOENIX_CONNECTIONS metrics counter, this can lead to client 
requests for Phoenix connections being rejected.

Tracking and keeping count of garbage-collected Phoenix connections can 
alleviate the above issues.

Providing additional logging during such reclaims will also provide more 
insight into a production environment.
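
One way to count connections reclaimed by GC without being closed is a 
java.lang.ref.Cleaner registered per connection; this is an illustrative 
standalone sketch, not Phoenix's actual implementation, and all names here are 
made up:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: count connections that become unreachable without close() ever
// being called, so leaks can be surfaced as a metric and logged.
public class LeakTracker {
    private static final Cleaner CLEANER = Cleaner.create();
    static final AtomicLong OPENED = new AtomicLong();
    static final AtomicLong CLOSED = new AtomicLong();
    static final AtomicLong LEAKED = new AtomicLong();

    static class TrackedConnection implements AutoCloseable {
        // State shared with the cleaning action; it must not reference the
        // connection itself, or the connection would never become unreachable.
        private static class State implements Runnable {
            volatile boolean closed;
            public void run() {
                if (!closed) {
                    LEAKED.incrementAndGet(); // reclaimed without close()
                }
            }
        }

        private final State state = new State();
        private final Cleaner.Cleanable cleanable;

        TrackedConnection() {
            OPENED.incrementAndGet();
            cleanable = CLEANER.register(this, state);
        }

        @Override
        public void close() {
            state.closed = true;
            CLOSED.incrementAndGet();
            cleanable.clean(); // runs the action now; sees closed == true
        }
    }
}
```

The leaked counter only advances when the collector actually reclaims an 
unclosed connection, so it is a lower bound at any instant.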





[jira] [Updated] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7047:
-
Description: 
 
{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
                + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
                + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                        + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
                globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
                indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                        + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                        + " AS SELECT * FROM %s WHERE ID1 = 42724",
                viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
                leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
                + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}
{code}


[jira] [Updated] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7047:
-
Description: 
 
{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
                + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
                + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                        + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
                globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
                indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                        + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                        + " AS SELECT * FROM %s WHERE ID1 = 42724",
                viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
                leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
                + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}
{code}
 


[jira] [Created] (PHOENIX-7047) Index rows not generated for certain multilevel views

2023-09-21 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7047:


 Summary: Index rows not generated for certain multilevel views
 Key: PHOENIX-7047
 URL: https://issues.apache.org/jira/browse/PHOENIX-7047
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac


{code:java}
@Test
public void testTenantViewUpdate() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        String schemaName = generateUniqueName();
        String dataTableName = generateUniqueName();
        String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
        String globalViewName = generateUniqueName();
        String globalViewFullName = SchemaUtil.getTableName(schemaName, globalViewName);
        String viewName = generateUniqueName();
        String viewFullName = SchemaUtil.getTableName(schemaName, viewName);
        String leafViewName = generateUniqueName();
        String leafViewFullName = SchemaUtil.getTableName(schemaName, leafViewName);
        String indexTableName1 = generateUniqueName();
        String indexTableFullName1 = SchemaUtil.getTableName(schemaName, indexTableName1);

        conn.createStatement().execute("CREATE TABLE " + dataTableFullName
                + " (OID CHAR(15) NOT NULL, KP CHAR(3) NOT NULL, VAL1 INTEGER,"
                + " VAL2 INTEGER CONSTRAINT PK PRIMARY KEY (OID, KP)) MULTI_TENANT=true");
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(ID1 INTEGER not null, COL4 VARCHAR,"
                        + " CONSTRAINT pk PRIMARY KEY (ID1)) AS SELECT * FROM %s WHERE KP = 'P01'",
                globalViewFullName, dataTableFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE INDEX IF NOT EXISTS %s ON %s(ID1) include (COL4)",
                indexTableName1, globalViewFullName));

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s(TP INTEGER not null, ROW_ID CHAR(15) NOT NULL,"
                        + " COLA VARCHAR CONSTRAINT pk PRIMARY KEY (TP, ROW_ID))"
                        + " AS SELECT * FROM %s WHERE ID1 = 42724",
                viewFullName, globalViewFullName));
        conn.commit();

        conn.createStatement().execute(String.format(
                "CREATE VIEW IF NOT EXISTS %s AS SELECT * from %s WHERE TP = 32",
                leafViewFullName, viewFullName));
        conn.commit();

        conn.createStatement().execute("UPSERT INTO " + leafViewFullName
                + " (OID, ROW_ID, COL4, COLA) values ('00D0y01', '00Z0y01', 'd07223', 'a05493')");
        conn.commit();

        TestUtil.dumpTable(conn, TableName.valueOf(dataTableFullName));
        TestUtil.dumpTable(conn, TableName.valueOf("_IDX_" + dataTableFullName));
    }
}
{code}





[jira] [Assigned] (PHOENIX-7046) Query results return different values when PKs of view have DESC order

2023-09-21 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7046:


Assignee: Viraj Jasani

> Query results return different values when PKs of view have DESC order
> --
>
> Key: PHOENIX-7046
> URL: https://issues.apache.org/jira/browse/PHOENIX-7046
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Jacob Isaac
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: Screenshot 2023-09-21 at 10.54.08 AM.png
>
>
> To reproduce -
> CREATE TABLE IF NOT EXISTS TEST_ENTITY.T1(OID CHAR(15) NOT NULL,KP CHAR(3) 
> NOT NULL, COL1 VARCHAR CONSTRAINT pk PRIMARY KEY (OID,KP)) 
> MULTI_TENANT=true,COLUMN_ENCODED_BYTES=0;
> CREATE VIEW IF NOT EXISTS TEST_ENTITY.G1_P01(ID1 INTEGER not null, COL4 
> VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1 DESC)) AS SELECT * FROM 
> TEST_ENTITY.T1 WHERE KP = 'P01';
> CREATE VIEW IF NOT EXISTS TEST_ENTITY.TV_P01(ROW_ID CHAR(15) NOT NULL,COLA 
> VARCHAR CONSTRAINT pk PRIMARY KEY (ROW_ID)) AS SELECT * FROM 
> TEST_ENTITY.G1_P01 WHERE ID1 = 42724;
> UPSERT INTO TEST_ENTITY.TV_P01(OID, ROW_ID, COL4, COLA) 
> VALUES('00D0y01', '00Z0y01','d07223','a05493');
> SELECT ID1, COL4 FROM TEST_ENTITY.TV_P01;
> SELECT ID1, COL4 FROM TEST_ENTITY.G1_P01;
>  
>  
> !Screenshot 2023-09-21 at 10.54.08 AM.png!





[jira] [Created] (PHOENIX-7046) Query results return different values when PKs of view have DESC order

2023-09-21 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7046:


 Summary: Query results return different values when PKs of view 
have DESC order
 Key: PHOENIX-7046
 URL: https://issues.apache.org/jira/browse/PHOENIX-7046
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac
 Attachments: Screenshot 2023-09-21 at 10.54.08 AM.png

To reproduce -

CREATE TABLE IF NOT EXISTS TEST_ENTITY.T1(OID CHAR(15) NOT NULL, KP CHAR(3)
    NOT NULL, COL1 VARCHAR CONSTRAINT pk PRIMARY KEY (OID, KP))
    MULTI_TENANT=true, COLUMN_ENCODED_BYTES=0;

CREATE VIEW IF NOT EXISTS TEST_ENTITY.G1_P01(ID1 INTEGER not null, COL4
    VARCHAR, CONSTRAINT pk PRIMARY KEY (ID1 DESC))
    AS SELECT * FROM TEST_ENTITY.T1 WHERE KP = 'P01';

CREATE VIEW IF NOT EXISTS TEST_ENTITY.TV_P01(ROW_ID CHAR(15) NOT NULL, COLA
    VARCHAR CONSTRAINT pk PRIMARY KEY (ROW_ID))
    AS SELECT * FROM TEST_ENTITY.G1_P01 WHERE ID1 = 42724;

UPSERT INTO TEST_ENTITY.TV_P01(OID, ROW_ID, COL4, COLA)
    VALUES('00D0y01', '00Z0y01', 'd07223', 'a05493');

SELECT ID1, COL4 FROM TEST_ENTITY.TV_P01;
SELECT ID1, COL4 FROM TEST_ENTITY.G1_P01;

!Screenshot 2023-09-21 at 10.54.08 AM.png!





[jira] [Created] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-09-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7041:


 Summary: Populate ROW_KEY_PREFIX column when creating views
 Key: PHOENIX-7041
 URL: https://issues.apache.org/jira/browse/PHOENIX-7041
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
 Fix For: 5.2.1


When a view statement is defined by the constraints articulated in 
PHOENIX-4555, all rows created through the view will share a prefixed 
KeyRange. The view can thus simply be represented by the prefixed KeyRange 
generated from the expression representing the view statement.

The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange 
will be used to create a PrefixIndex, mapping a row prefix to a view.
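
The PrefixIndex lookup amounts to resolving a full row key to the view whose 
stored prefix it starts with. A minimal in-memory sketch of that idea (not 
the actual Phoenix data structure; all names here are illustrative, and it 
assumes view prefixes are non-overlapping, per PHOENIX-4555's disjointness):

```java
import java.util.TreeMap;

// Sketch: map a stored row-key prefix to the view it belongs to, and
// resolve a full row key to its owning view by prefix match.
public class PrefixIndex {
    private final TreeMap<String, String> prefixToView = new TreeMap<>();

    public void put(String rowKeyPrefix, String viewName) {
        prefixToView.put(rowKeyPrefix, viewName);
    }

    /** Returns the view whose prefix the row key starts with, or null. */
    public String resolve(String rowKey) {
        // floorKey gives the greatest stored prefix <= rowKey; a matching
        // prefix always sorts <= the full key, so checking startsWith on the
        // floor candidate suffices when stored prefixes are disjoint.
        String candidate = prefixToView.floorKey(rowKey);
        return (candidate != null && rowKey.startsWith(candidate))
                ? prefixToView.get(candidate) : null;
    }
}
```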





[jira] [Assigned] (PHOENIX-7041) Populate ROW_KEY_PREFIX column when creating views

2023-09-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7041:


Assignee: Jacob Isaac

> Populate ROW_KEY_PREFIX column when creating views
> --
>
> Key: PHOENIX-7041
> URL: https://issues.apache.org/jira/browse/PHOENIX-7041
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.1
>
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created through the view will be prefixed by a 
> KeyRange. The view can thus be represented simply by the prefixed KeyRange 
> generated from the expression in the view statement. 
> The ROW_KEY_PREFIX column will store this KeyRange. The prefixed KeyRange 
> will be used to create a PrefixIndex, mapping a row prefix to a view.





[jira] [Created] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-09-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7040:


 Summary: Support TTL for views using the new column TTL in 
SYSTEM.CATALOG
 Key: PHOENIX-7040
 URL: https://issues.apache.org/jira/browse/PHOENIX-7040
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Allow views to be created with TTL specs.

Ensure TTL is specified only once in the view hierarchy.

Child views should inherit the TTL value from their parent when one is not 
specified for the given view.

Indexes should inherit the TTL values from the base tables/views.
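For illustration, the intended usage might look as follows (the exact TTL 
syntax and property placement are assumptions based on this jira, not a 
confirmed grammar; names are made up):

```sql
CREATE TABLE IF NOT EXISTS TEST.BASE (
    KP CHAR(3) NOT NULL,
    ID INTEGER NOT NULL,
    COL1 VARCHAR
    CONSTRAINT pk PRIMARY KEY (KP, ID));

-- TTL in seconds, matching the HBase convention; stored in the new
-- TTL column of SYSTEM.CATALOG rather than on the HBase table.
CREATE VIEW IF NOT EXISTS TEST.V_P01 AS
    SELECT * FROM TEST.BASE WHERE KP = 'P01' TTL = 86400;

-- No TTL here: the child view inherits 86400 from TEST.V_P01.
-- Specifying TTL again at this level should be rejected, since TTL may
-- appear only once in the view hierarchy.
CREATE VIEW IF NOT EXISTS TEST.V_P01_CHILD AS
    SELECT * FROM TEST.V_P01 WHERE ID > 100;
```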





[jira] [Assigned] (PHOENIX-7040) Support TTL for views using the new column TTL in SYSTEM.CATALOG

2023-09-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7040:


Assignee: Lokesh Khurana

> Support TTL for views using the new column TTL in SYSTEM.CATALOG
> 
>
> Key: PHOENIX-7040
> URL: https://issues.apache.org/jira/browse/PHOENIX-7040
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Allow views to be created with TTL specs.
> Ensure TTL is specified only once in the view hierarchy.
> Child views should inherit the TTL value from their parent when one is not 
> specified for the given view.
> Indexes should inherit the TTL values from the base tables/views.





[jira] [Assigned] (PHOENIX-7022) Add new columns TTL and ROWKEY_PREFIX

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7022:


Assignee: Lokesh Khurana

> Add new columns TTL and ROWKEY_PREFIX
> -
>
> Key: PHOENIX-7022
> URL: https://issues.apache.org/jira/browse/PHOENIX-7022
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> When a view statement is defined by the constraints articulated in 
> PHOENIX-4555, all rows created through the view will be prefixed by a 
> KeyRange. The view can thus be represented simply by the prefixed KeyRange 
> generated from the expression in the view statement. In other words, there 
> exists a one-to-one mapping between the view (defined by tenant, schema, and 
> table name) and the PREFIXED KeyRange.
> For lookups on the PREFIXED KeyRange, we will create a new column 
> ROWKEY_PREFIX in SYSTEM.CATALOG. This new column will be populated during 
> view creation when TTL is specified.
> The TTL column (INTEGER) will store the TTL, when specified, in line with 
> the HBase spec (which uses an int). The PHOENIX_TTL-related columns and code 
> will be deprecated in a separate jira.





[jira] [Assigned] (PHOENIX-7023) Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-7023:


Assignee: Lokesh Khurana

> Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code
> --
>
> Key: PHOENIX-7023
> URL: https://issues.apache.org/jira/browse/PHOENIX-7023
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> Deprecate the old columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code 
> since they are not compatible with the new 
> [design|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit].





[jira] [Assigned] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6978:


Assignee: Jacob Isaac  (was: Lokesh Khurana)

> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> old design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and apply deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows (the delete markers themselves) and made the 
> scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning rows under HBase TTL.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555.
> [New Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]





[jira] [Created] (PHOENIX-7023) Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7023:


 Summary: Deprecate columns PHOENIX_TTL and PHOENIX_TTL_HWM and 
related code
 Key: PHOENIX-7023
 URL: https://issues.apache.org/jira/browse/PHOENIX-7023
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


Deprecate the old columns PHOENIX_TTL and PHOENIX_TTL_HWM and related code 
since they are not compatible with the new 
[design|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit].





[jira] [Created] (PHOENIX-7022) Add new columns TTL and ROWKEY_PREFIX

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7022:


 Summary: Add new columns TTL and ROWKEY_PREFIX
 Key: PHOENIX-7022
 URL: https://issues.apache.org/jira/browse/PHOENIX-7022
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


When a view statement is defined by the constraints articulated in 
PHOENIX-4555, all rows created through the view will be prefixed by a KeyRange. 
The view can thus be represented simply by the prefixed KeyRange generated from 
the expression in the view statement. In other words, there exists a 
one-to-one mapping between the view (defined by tenant, schema, and table 
name) and the PREFIXED KeyRange.

For lookups on the PREFIXED KeyRange, we will create a new column ROWKEY_PREFIX 
in SYSTEM.CATALOG. This new column will be populated during view creation when 
TTL is specified.

The TTL column (INTEGER) will store the TTL, when specified, in line with the 
HBase spec (which uses an int). The PHOENIX_TTL-related columns and code will 
be deprecated in a separate jira.





[jira] [Updated] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6978:
-
Description: 
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL old design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and apply deletion logic when pruning 
the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows (the delete markers themselves) and made the scans 
less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning rows under HBase TTL.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555.

[New Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]

  was:
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and apply deletion logic when pruning 
the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows (the delete markers themselves) and made the scans 
less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning rows under HBase TTL.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555.


> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> old design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and apply deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows (the delete markers themselves) and made the 
> scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning rows under HBase TTL.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555.
> [New Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]





[jira] [Updated] (PHOENIX-7021) Design doc for Phoenix view TTL using Phoenix compactions

2023-08-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-7021:
-
Description: [Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]
  (was: [Design 
doc|[http://example.com|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]])

> Design doc for Phoenix view TTL using Phoenix compactions
> -
>
> Key: PHOENIX-7021
> URL: https://issues.apache.org/jira/browse/PHOENIX-7021
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
> [Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]





[jira] [Created] (PHOENIX-7021) Design doc for Phoenix view TTL using Phoenix compactions

2023-08-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7021:


 Summary: Design doc for Phoenix view TTL using Phoenix compactions
 Key: PHOENIX-7021
 URL: https://issues.apache.org/jira/browse/PHOENIX-7021
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac


[Design 
doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]





[jira] [Created] (PHOENIX-7017) Recreating a view deletes the metadata in CHILD_LINK table

2023-08-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-7017:


 Summary: Recreating a view deletes the metadata in CHILD_LINK table
 Key: PHOENIX-7017
 URL: https://issues.apache.org/jira/browse/PHOENIX-7017
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0, 5.1.4
Reporter: Jacob Isaac


Steps to reproduce :

Create the same view twice.

The link from the parent table to its child view (link_type = 4) in 
SYSTEM.CHILD_LINK is deleted.
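The steps above can be sketched as follows (names are illustrative; LINK_TYPE 
= 4 is the parent-to-child link in the Phoenix system catalog):

```sql
CREATE TABLE IF NOT EXISTS TEST.T (ID INTEGER PRIMARY KEY, COL1 VARCHAR);

-- First creation writes the parent->child row (LINK_TYPE = 4).
CREATE VIEW IF NOT EXISTS TEST.V AS SELECT * FROM TEST.T WHERE ID > 0;

-- The link row should be visible here.
SELECT TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY
FROM SYSTEM.CHILD_LINK WHERE LINK_TYPE = 4;

-- Re-running the identical statement should be a no-op, but the bug
-- deletes the LINK_TYPE = 4 row, orphaning the view from its parent.
CREATE VIEW IF NOT EXISTS TEST.V AS SELECT * FROM TEST.T WHERE ID > 0;
```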





[jira] [Assigned] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-07-13 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6978:


Assignee: Lokesh Khurana

> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and apply deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows (the delete markers themselves) and made the 
> scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning rows under HBase TTL.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555.





[jira] [Created] (PHOENIX-6996) Provide an upgrade path for Phoenix tables with HBase TTL to move their TTL spec to SYSTEM.CATALOG

2023-07-13 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6996:


 Summary: Provide an upgrade path for Phoenix tables with HBase TTL 
to move their TTL spec to SYSTEM.CATALOG
 Key: PHOENIX-6996
 URL: https://issues.apache.org/jira/browse/PHOENIX-6996
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac








[jira] [Updated] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-07-13 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6978:
-
Description: 
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the old design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and apply deletion logic when pruning 
the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows (the delete markers themselves) and made the scans 
less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning rows under HBase TTL.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555.

  was:
With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and apply deletion logic when pruning 
the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows (the delete markers themselves) and made the scans 
less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning rows under HBase TTL.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555.


> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Jacob Isaac
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should 
> be a Phoenix view level setting instead of being at the table level as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results and apply deletion logic when 
> pruning the rows from the store. In HBase, the pruning is achieved during the 
> compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows (the delete markers themselves) and made the 
> scans less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning rows under HBase TTL.
> This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
> PHOENIX-4555.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6979) When phoenix.table.ttl.enabled=true use HBase TTL property value to be the TTL for tables and views

2023-06-14 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6979:


 Summary: When phoenix.table.ttl.enabled=true use HBase TTL 
property value to be the TTL for tables and views
 Key: PHOENIX-6979
 URL: https://issues.apache.org/jira/browse/PHOENIX-6979
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Lokesh Khurana








[jira] [Created] (PHOENIX-6978) Redesign Phoenix TTL for Views

2023-06-14 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6978:


 Summary: Redesign Phoenix TTL for Views
 Key: PHOENIX-6978
 URL: https://issues.apache.org/jira/browse/PHOENIX-6978
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jacob Isaac


With Phoenix TTL for views (PHOENIX-3725), the basic gist was the TTL should be 
a Phoenix view level setting instead of being at the table level as implemented 
in HBase. More details on the design are here ([Phoenix TTL design 
doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).

Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
scanning phase when serving query results and apply deletion logic when pruning 
the rows from the store. In HBase, the pruning is achieved during the 
compaction phase.

The initial design and implementation of Phoenix TTL for views used the MR 
framework to run delete jobs to prune away the expired rows. We knew this was a 
sub-optimal solution since it required managing and monitoring MR jobs. It 
would also have introduced additional delete markers, which would have 
temporarily added more rows (the delete markers themselves) and made the scans 
less performant.

Using the HBase compaction framework instead to prune away the expired rows 
would fit nicely into the existing architecture and would be as efficient as 
pruning rows under HBase TTL.

This jira proposes a redesign of Phoenix TTL for Views using PHOENIX-6888 and 
PHOENIX-4555.





[jira] [Updated] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-05-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6910:
-
Attachment: 0001-PHOENIX-6910-initial-commit.patch

> Scans created during query compilation and execution against salted tables 
> need to be more resilient
> 
>
> Key: PHOENIX-6910
> URL: https://issues.apache.org/jira/browse/PHOENIX-6910
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>    Reporter: Jacob Isaac
>Assignee: Istvan Toth
>Priority: Major
> Attachments: 0001-PHOENIX-6910-initial-commit.patch
>
>
> The Scan objects created during the WHERE clause compilation and execution 
> phases are incorrect when salted tables are involved and their regions have 
> moved.





[jira] [Assigned] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-03-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6910:


Assignee: Jacob Isaac

> Scans created during query compilation and execution against salted tables 
> need to be more resilient
> 
>
> Key: PHOENIX-6910
> URL: https://issues.apache.org/jira/browse/PHOENIX-6910
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> The Scan objects created during the WHERE clause compilation and execution 
> phases are incorrect when salted tables are involved and their regions have 
> moved.





[jira] [Created] (PHOENIX-6910) Scans created during query compilation and execution against salted tables need to be more resilient

2023-03-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6910:


 Summary: Scans created during query compilation and execution 
against salted tables need to be more resilient
 Key: PHOENIX-6910
 URL: https://issues.apache.org/jira/browse/PHOENIX-6910
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Jacob Isaac


The Scan objects created during the WHERE clause compilation and execution 
phases are incorrect when salted tables are involved and their regions have 
moved.
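For reference, a salted table is declared as below (illustrative schema); the 
salt byte Phoenix prepends to each row key is what the compiled Scan 
boundaries must line up with per region:

```sql
-- SALT_BUCKETS = 8 makes Phoenix prepend a one-byte hash (0-7) to every
-- row key, so a logical key range fans out into one range per bucket.
CREATE TABLE IF NOT EXISTS TEST.SALTED (
    ID VARCHAR PRIMARY KEY,
    COL1 VARCHAR) SALT_BUCKETS = 8;
```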





[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-09-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6752:
-
Description: 
SQL queries using the OR operator were taking a long time during the WHERE 
clause compilation phase when a large number of OR clauses (~50k) are used.

The key observation was that during AND/OR processing, when there is a large 
number of OR expression nodes, the same set of extracted nodes was getting 
added repeatedly, bloating the set size and slowing down the processing.

[code|https://github.com/apache/phoenix/blob/0c2008ddf32566c525df26cb94d60be32acc10da/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java#L930]

  was:SQL queries using the OR operator were taking a long time during the 
WHERE clause compilation phase when a large number of OR clauses (~50k) are 
used.


> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) are used.
> The key observation was that during AND/OR processing, when there is a large 
> number of OR expression nodes, the same set of extracted nodes was getting 
> added repeatedly, bloating the set size and slowing down the processing.
> [code|https://github.com/apache/phoenix/blob/0c2008ddf32566c525df26cb94d60be32acc10da/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java#L930]
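For context, the affected queries have this shape (illustrative schema; the 
reported workloads used on the order of 50k OR arms):

```sql
-- Each OR arm contributes extract nodes during WHERE optimization;
-- before the fix the same nodes were re-added for every arm, bloating
-- the extract-node set and slowing compilation.
SELECT COL1 FROM TEST.T
WHERE (PK1 = 'a' AND PK2 = 1)
   OR (PK1 = 'b' AND PK2 = 2)
   OR (PK1 = 'c' AND PK2 = 3);
   -- ...repeated for tens of thousands of arms in the reported case
```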





[jira] [Assigned] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6751:


Assignee: Jacob Isaac

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above
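The failing queries look like the following (illustrative; PK2 DESC supplies 
the differing SortOrder, and the real workloads used ~50k RVC elements in the 
IN list):

```sql
CREATE TABLE IF NOT EXISTS TEST.T (
    PK1 VARCHAR NOT NULL,
    PK2 INTEGER NOT NULL,
    COL1 VARCHAR
    CONSTRAINT pk PRIMARY KEY (PK1, PK2 DESC));

-- Each row value constructor becomes one point key; with ~50k of them
-- the skip scan filter's point-key list explodes, so falling back to a
-- range scan is the safer plan.
SELECT COL1 FROM TEST.T
WHERE (PK1, PK2) IN (('a', 1), ('b', 2), ('c', 3));
```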





[jira] [Assigned] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6752:


Assignee: Jacob Isaac

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) are used.





[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Attachment: (was: test-case.txt)

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>    Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6752:
-
Attachment: test-case.txt

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> leads to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>    Reporter: Jacob Isaac
>Priority: Major
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) were used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Attachment: test-case.txt

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>    Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
> Attachments: test-case.txt
>
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase leads to poor performance.

2022-07-20 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6752:


 Summary: Duplicate expression nodes in extract nodes during WHERE 
compilation phase leads to poor performance.
 Key: PHOENIX-6752
 URL: https://issues.apache.org/jira/browse/PHOENIX-6752
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.1, 5.1.0, 4.15.0, 5.2.0
Reporter: Jacob Isaac


SQL queries using the OR operator were taking a long time during the WHERE 
clause compilation phase when a large number of OR clauses (~50k) were used.
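The duplicate-node problem lends itself to a small generic sketch (plain Python 
with hypothetical names, not the actual Phoenix WHERE-compiler code): if extract 
nodes are accumulated in a plain list, duplicates pile up and every later 
containment check or removal scans the whole list, so ~50k OR clauses degrade 
quadratically; a set-backed collection deduplicates on insert and keeps each 
check constant-time.

```python
# Illustrative sketch (hypothetical, not the actual Phoenix WHERE compiler):
# collecting "extract nodes" in a plain list admits duplicates, and each
# duplicate makes later containment checks and removals more expensive.
# Tracking seen nodes in a set deduplicates on insert in O(1) per clause.
def collect_extract_nodes(or_clauses):
    seen = set()
    ordered = []           # preserve first-seen order, as a compiler would
    for clause in or_clauses:
        if clause not in seen:
            seen.add(clause)
            ordered.append(clause)
    return ordered

# 50k clauses over only 100 distinct expressions collapse to 100 nodes.
clauses = [f"col = {i % 100}" for i in range(50_000)]
print(len(collect_extract_nodes(clauses)))  # 100
```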



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6751:
-
Description: 
SQL queries using the IN operator with PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key ranges. 
[ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above

  was:
SQL queries using the IN operator using PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key 
ranges.[ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above


> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>    Reporter: Jacob Isaac
>Priority: Major
>
> SQL queries using the IN operator with PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6751:


 Summary: Force using range scan vs skip scan when using the IN 
operator and large number of RVC elements 
 Key: PHOENIX-6751
 URL: https://issues.apache.org/jira/browse/PHOENIX-6751
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.0, 5.1.1, 4.15.0, 5.2.0
Reporter: Jacob Isaac


SQL queries using the IN operator with PKs of different SortOrder were failing 
during the WHERE clause compilation phase and causing OOM issues on the servers 
when a large number (~50k) of RVC elements were used in the IN operator.

SQL queries were failing specifically during the skip scan filter generation. 
The skip scan filter is generated using a list of point key 
ranges. [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]

The following getPointKeys 
[code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
 uses the KeyRange sets to create a new list of point-keys. When there are a 
large number of RVC elements the above
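One way to see why filter generation collapses at this scale (an illustrative 
reading, not the actual ScanRanges code): recombining per-slot KeyRange sets 
into point keys is a cross product over the PK slots, so the point-key list 
grows multiplicatively with the number of slots and values per slot.

```python
from itertools import product

# Illustrative sketch (not Phoenix code): a skip-scan filter needs one point
# key per combination of per-column key values. Once ranges are decomposed
# into per-slot sets, recombining them is a cross product, so the point-key
# list grows multiplicatively -- quickly reaching the ~50k scale at which
# the reported queries started failing.
def point_key_count(values_per_slot):
    keys = [()]
    for slot_values in values_per_slot:
        keys = [k + (v,) for k, v in product(keys, slot_values)]
    return len(keys)

# 3 PK columns with 40 distinct values each already yields 64,000 point keys.
print(point_key_count([range(40)] * 3))  # 64000
```

Per the Phoenix hint documentation, a `/*+ RANGE_SCAN */` query hint forces the 
range-scan plan that the issue title proposes making automatic, avoiding 
point-key materialization for an individual query.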



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [ANNOUNCE] Gokcen Iskender joins Phoenix PMC

2022-07-12 Thread Jacob Isaac
Congrats! Gokcen.

On Tue, Jul 12, 2022 at 11:48 AM Andrew Purtell  wrote:

> Congratulations Gokcen.
>
> On Tue, Jul 5, 2022 at 12:21 PM Geoffrey Jacoby 
> wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Gokcen
> > Iskender
> > has accepted our invitation to join the PMC.
> >
> > Please join me in congratulating Gokcen!
> >
> > Thanks,
> >
> > Geoffrey Jacoby
> >
>
>
> --
> Best regards,
> Andrew
>
> Unrest, ignorance distilled, nihilistic imbeciles -
> It's what we’ve earned
> Welcome, apocalypse, what’s taken you so long?
> Bring us the fitting end that we’ve been counting on
>- A23, Welcome, Apocalypse
>


[jira] [Assigned] (PHOENIX-6688) Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number of tenant views

2022-04-22 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6688:


Assignee: Jacob Isaac

> Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number 
> of tenant views
> ---
>
> Key: PHOENIX-6688
> URL: https://issues.apache.org/jira/browse/PHOENIX-6688
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 4.17.0, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> Caused by: org.apache.phoenix.schema.MaxMutationSizeExceededException: ERROR 
> 729 (LIM01): MutationState size is bigger than maximum allowed number of 
> rows, try upserting rows in smaller batches or using autocommit on for 
> deletes.
> at 
> org.apache.phoenix.exception.SQLExceptionCode$21.newException(SQLExceptionCode.java:526)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:228)
> at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:191)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:175)
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:142)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1341)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1280)
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:187)
> at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:93)
> at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1409)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1885)
> at 
> org.apache.phoenix.util.UpgradeUtil.moveChildLinks(UpgradeUtil.java:1181)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemChildLink(ConnectionQueryServicesImpl.java:4055)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeOtherSystemTablesIfRequired(ConnectionQueryServicesImpl.java:4033)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3958)
> at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.upgradeSystemTables(DelegateConnectionQueryServices.java:362)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExecuteUpgradeStatement$1.execute(PhoenixStatement.java:1445)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1866)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-04-22 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6687:


Assignee: Jacob Isaac

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 5.1.1, 4.16.1, 5.2.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while it is 
> experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed since it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  made getTable(..) require an additional server-to-server RPC call 
> when initializing a PhoenixConnection (CQSI.init) for the first time on the 
> JVM. 
> It is well-known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (PHOENIX-6688) Upgrade to phoenix 4.16 metadata upgrade fails when SYSCAT has large number of tenant views

2022-04-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6688:


 Summary: Upgrade to phoenix 4.16 metadata upgrade fails when 
SYSCAT has large number of tenant views
 Key: PHOENIX-6688
 URL: https://issues.apache.org/jira/browse/PHOENIX-6688
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.16.1, 4.17.0, 5.2.0
Reporter: Jacob Isaac


Caused by: org.apache.phoenix.schema.MaxMutationSizeExceededException: ERROR 
729 (LIM01): MutationState size is bigger than maximum allowed number of rows, 
try upserting rows in smaller batches or using autocommit on for deletes.

at 
org.apache.phoenix.exception.SQLExceptionCode$21.newException(SQLExceptionCode.java:526)

at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:228)

at 
org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:191)

at 
org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:175)

at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:142)

at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1341)

at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1280)

at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:187)

at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:93)

at 
org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1409)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)

at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1885)

at org.apache.phoenix.util.UpgradeUtil.moveChildLinks(UpgradeUtil.java:1181)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemChildLink(ConnectionQueryServicesImpl.java:4055)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeOtherSystemTablesIfRequired(ConnectionQueryServicesImpl.java:4033)

at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3958)

at 
org.apache.phoenix.query.DelegateConnectionQueryServices.upgradeSystemTables(DelegateConnectionQueryServices.java:362)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExecuteUpgradeStatement$1.execute(PhoenixStatement.java:1445)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)

at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1866)
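The exception message itself names the mitigation: upsert in smaller batches so 
the client-side MutationState never crosses the configured row limit 
(phoenix.mutate.maxSize). A generic sketch of that batching pattern follows; 
the connection object is a stand-in for illustration, not the actual 
UpgradeUtil.moveChildLinks code.

```python
# Generic batching sketch (the connection here is a stand-in, not the actual
# UpgradeUtil.moveChildLinks code): committing every `batch_size` rows keeps
# the client-side MutationState below the configured row limit instead of
# accumulating every mutation in one giant uncommitted transaction.
class FakeConn:
    """Minimal stand-in for a JDBC/DB-API connection, for illustration."""
    def __init__(self):
        self.pending = 0   # uncommitted mutations
        self.commits = 0
    def execute(self, sql, row):
        self.pending += 1
    def commit(self):
        self.pending = 0
        self.commits += 1

def upsert_in_batches(conn, rows, batch_size=1000):
    for i, row in enumerate(rows, start=1):
        conn.execute("UPSERT INTO SYSTEM.CHILD_LINK VALUES (?, ?)", row)
        if i % batch_size == 0:
            conn.commit()          # flush before the mutation limit is hit
    if conn.pending:
        conn.commit()              # flush the final partial batch

conn = FakeConn()
upsert_in_batches(conn, [("tenant", i) for i in range(2500)], batch_size=1000)
print(conn.commits)  # 3 commits: 1000 + 1000 + 500 rows
```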



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6687:
-
Summary: The region server hosting the SYSTEM.CATALOG fails to serve any 
metadata requests as default handler pool  threads are exhausted.  (was: The 
region server hosting the SYSTEM.CATALOG fails to serve any metadata requests 
as handler pools are exhausted.)

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while it is 
> experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed since it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  made getTable(..) require an additional server-to-server RPC call 
> when initializing a PhoenixConnection (CQSI.init) for the first time on the 
> JVM. 
> It is well-known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as handler pools are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6687:
-
Attachment: stacktraces.txt

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as handler pools are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
> Attachments: stacktraces.txt
>
>
> This occurs when the SYSTEM.CATALOG region server is restarted while it is 
> experiencing heavy metadata call volume.
> The stack traces indicate that all the default handler pool threads are 
> waiting for the CQSI.init thread to finish initializing.
> The CQSI.init thread itself cannot proceed since it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  made getTable(..) require an additional server-to-server RPC call 
> when initializing a PhoenixConnection (CQSI.init) for the first time on the 
> JVM. 
> It is well-known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as handler pools are exhausted.

2022-04-15 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6687:


 Summary: The region server hosting the SYSTEM.CATALOG fails to 
serve any metadata requests as handler pools are exhausted.
 Key: PHOENIX-6687
 URL: https://issues.apache.org/jira/browse/PHOENIX-6687
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 4.16.1, 5.2.0
Reporter: Jacob Isaac
 Fix For: 4.17.0, 5.2.0


This occurs when the SYSTEM.CATALOG region server is restarted while it is 
experiencing heavy metadata call volume.

The stack traces indicate that all the default handler pool threads are waiting 
for the CQSI.init thread to finish initializing.
The CQSI.init thread itself cannot proceed since it cannot complete the second 
RPC call 
(org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
 due to thread starvation.

For example, the following 
[code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
 made getTable(..) require an additional server-to-server RPC call 
when initializing a PhoenixConnection (CQSI.init) for the first time on the 
JVM. 
It is well-known that server-to-server RPC calls are prone to deadlocking due 
to thread pool exhaustion.
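The failure shape described here — every handler thread blocked on work that 
can only be scheduled on those same handler threads — is the classic 
nested-thread-pool deadlock. A small generic illustration (plain Python, not 
HBase RPC code) of why the nested call needs a pool of its own:

```python
from concurrent.futures import ThreadPoolExecutor

# Generic illustration (not HBase/Phoenix code) of the deadlock shape above:
# a handler that blocks on a nested task can never finish if that task must
# be scheduled on the same saturated pool. If rpc_pool below were the same
# single-thread handler_pool, future.result() would hang forever -- the same
# circular wait as the CQSI.init starvation described. A dedicated pool for
# the nested server-to-server call breaks the cycle.
def handle_metadata_request(rpc_pool):
    nested = rpc_pool.submit(lambda: "compat-check-ok")  # the "second RPC"
    return nested.result()     # handler blocks until the nested call runs

handler_pool = ThreadPoolExecutor(max_workers=1)  # the "default handler pool"
rpc_pool = ThreadPoolExecutor(max_workers=1)      # separate pool: no deadlock

future = handler_pool.submit(handle_metadata_request, rpc_pool)
print(future.result())  # compat-check-ok
```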



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Updated] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2021-08-18 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6530:
-
Fix Version/s: 5.1.3
Affects Version/s: 5.1.2

> Fix tenantId generation for Sequential and Uniform load generators
> --
>
> Key: PHOENIX-6530
> URL: https://issues.apache.org/jira/browse/PHOENIX-6530
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.17.0, 5.1.2
>    Reporter: Jacob Isaac
>Priority: Major
> Fix For: 4.17.0, 5.1.3
>
>
> While running the perf workloads for 4.16, we found that tenantId generation 
> across the various generators does not match.
> As a result, read queries fail when the writes/data were created using a 
> different generator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2021-08-18 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6530:


 Summary: Fix tenantId generation for Sequential and Uniform load 
generators
 Key: PHOENIX-6530
 URL: https://issues.apache.org/jira/browse/PHOENIX-6530
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.17.0
Reporter: Jacob Isaac
 Fix For: 4.17.0


While running the perf workloads for 4.16, we found that tenantId generation 
across the various generators does not match.

As a result, read queries fail when the writes/data were created using a 
different generator.
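A tiny generic sketch of the fix direction (hypothetical id format and function 
names, not the actual Pherf load-generator code): both load generators should 
derive tenant ids through one shared formatting function, so reads driven by 
the uniform generator can locate rows written by the sequential one.

```python
import random

# Hypothetical sketch (not the actual Pherf load-generator code): both
# generators must funnel through ONE canonical formatter. If the sequential
# writer pads ids one way and the uniform reader another, every lookup
# misses -- which is the failure mode described above.
def tenant_id(n, width=15):
    return f"T{n:0{width - 1}d}"   # single shared format for all generators

def sequential_ids(count):
    return [tenant_id(i) for i in range(count)]

def uniform_ids(count, max_tenant, seed=42):
    rng = random.Random(seed)
    return [tenant_id(rng.randrange(max_tenant)) for _ in range(count)]

written = set(sequential_ids(100))
# Every uniformly drawn id now matches an id the sequential writer produced.
print(all(t in written for t in uniform_ids(20, max_tenant=100)))  # True
```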



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [Announce] New Phoenix PMC member : Viraj Jasani

2021-06-18 Thread Jacob Isaac
Congrats Viraj!!

~Jacob

On Fri, Jun 18, 2021 at 1:26 PM Ankit Singhal 
wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Viraj
> Jasani
> has accepted our invitation to join the PMC.
>
> Please join me in congratulating Viraj.
>
> Thanks,
> Ankit Singhal
>


Re: [ANNOUNCE] New Committer Jacob Isaac

2021-06-06 Thread Jacob Isaac
Thanks Sukumar and Ankit for your warm welcome!

Jacob

On Sat, Jun 5, 2021 at 10:12 AM Ankit Singhal 
wrote:

> Congratulations and welcome !! Jacob
>
> On Fri, Jun 4, 2021 at 8:27 AM Sukumar Maddineni
>  wrote:
>
> > Oh wow congrats Jacob and keep it up.
> >
> > -
> > Sukumar
> >
> > On Thu, Jun 3, 2021 at 10:22 PM Jacob Isaac 
> > wrote:
> >
> > > Hi Everybody
> > >
> > > Thank you for the warm welcome.
> > > Happy to be part of the team.
> > >
> > > ~Jacob
> > >
> > >
> > >
> > > On Thu, Jun 3, 2021 at 10:12 AM Andrew Purtell 
> > > wrote:
> > >
> > > > Congratulations and welcome, Jacob.
> > > >
> > > > On Wed, Jun 2, 2021 at 8:17 PM Xinyi Yan 
> wrote:
> > > >
> > > > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that
> > Jacob
> > > > > Isaac has accepted the PMC's invitation to become a committer on
> > Apache
> > > > > Phoenix.
> > > > >
> > > > > We appreciate all of the great contributions Jacob has made to the
> > > > > community thus far and we look forward to his continued
> involvement.
> > > > >
> > > > > Welcome, Jacob!
> > > > >
> > > > > Xinyi
> > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Andrew
> > > >
> > > > Words like orphans lost among the crosstalk, meaning torn from
> truth's
> > > > decrepit hands
> > > >- A23, Crosstalk
> > > >
> > >
> >
> >
> > --
> >
> > <https://smart.salesforce.com/sig/smaddineni//us_mb/default/link.html>
> >
>


Re: [ANNOUNCE] New Committer Jacob Isaac

2021-06-03 Thread Jacob Isaac
Hi Everybody

Thank you for the warm welcome.
Happy to be part of the team.

~Jacob



On Thu, Jun 3, 2021 at 10:12 AM Andrew Purtell  wrote:

> Congratulations and welcome, Jacob.
>
> On Wed, Jun 2, 2021 at 8:17 PM Xinyi Yan  wrote:
>
> > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Jacob
> > Isaac has accepted the PMC's invitation to become a committer on Apache
> > Phoenix.
> >
> > We appreciate all of the great contributions Jacob has made to the
> > community thus far and we look forward to his continued involvement.
> >
> > Welcome, Jacob!
> >
> > Xinyi
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: [DISCUSS] Separating client and server side code

2021-04-19 Thread Jacob Isaac
Thanks, Lars, for bringing that old discussion back; I did briefly talk
about it with Istvan and Josh.
As you pointed out, Istvan's attempt to modularize based on dependencies is
a good first step towards that goal.


@Istvan Toth 
Will try and test this on our local clusters sometime this week and will
let you know.

Thanks
Jacob


On Sun, Apr 18, 2021 at 6:13 PM la...@apache.org  wrote:

> There is also another angle to look at. A long time ago I wrote this:
>
> "
> It seems Phoenix serves 4 distinct purposes:
> 1. Query parsing and compiling.
> 2. A type system
> 3. Query execution
> 4. Efficient HBase interface
>
> Each of these is useful by itself, but we do not expose these as stable
> interfaces.
> We have seen a lot of need to tie HBase into "higher level" service, such
> as Spark (and Presto, etc).
> I think we can get a long way if we separate at least #1 (SQL) from the
> rest #2, #3, and #4 (Typed HBase Interface - THI).
> Phoenix is used via SQL (#1), other tools such as Presto, Impala, Drill,
> Spark, etc, can interface efficiently with HBase via THI (#2, #3, and #4).
> "
>
> I still believe this is an additional useful demarcation for how to group
> the code. And coincided somewhat with server/client.
>
> Query parsing and the type system are client. Query execution and HBase
> interface are both client and server.
>
> -- Lars
>
> On Wednesday, April 14, 2021, 8:56:08 AM PDT, Istvan Toth <
> st...@apache.org> wrote:
>
>
>
>
>
> Jacob, Josh and me had a discussion about the topic.
>
> I'm attaching the dependency graph of the proposed modules
>
>
>
> On Fri, Apr 9, 2021 at 6:30 AM Istvan Toth  wrote:
> > The bulk of the changes I'm working on is indeed the separation of the
> client and the server side code.
> >
> > Separating the MR related classes, and the tools-specific code (main,
> options parsing, etc) makes sense to me, if we don't mind adding another
> module.
> >
> > In the first WIP iteration, I'm splitting out everything that depends on
> more than hbase-client into a "server" module.
> > Once that works I will look at splitting that further into a  real
> "server" and an "MR/tools" module.
> >
> >
> > My initial estimates about splitting the server side code were way too
> optimistic, we have to touch a lot of code to break circular dependencies
> between the client and server side. The changes are still quite trivial,
> but the patch is going to be huge and scary.
> >
> >
> > Tests are also going to be a problem, we're probably going to have to
> move most of them into the "server" or a separate "tests" module, as the
> MiniCluster tests depend on code from each module.
> >
> > The plan in PHOENIX-5483, and Lars's mail sounds good, but I think that
> it would be more about dividing the "client-side" module further.
> > (BTW I think that making the indexing engine available separately would
> also be a popular feature )
> >
> >
> >
> > On Fri, Apr 9, 2021 at 5:39 AM Daniel Wong  wrote:
> >> This is another project that my group at Salesforce and I are
> >> interested in.  We have had some discussions internally on this but I
> wasn't
> >> aware of this specific Spark issue (We only allow phoenix access via
> spark
> >> by default).  I think the approaches outlined are a good initial step
> but
> >> we were also considering a larger breakup of phoenix-core.  I don't
> >> think the desire for the larger step should stop us from doing the
> initial
ones Istvan and Josh proposed.  I think the high level plan makes sense
> >> but I might prefer a different name than phoenix-tools for the ones we
> want
> >> to be available to external libraries like phoenix-connectors.  Another
> >> possible alternative is to restructure maybe less invasively by making
> >> phoenix core like your proposed tools and making a phoenix-internal or
> >> similar for the future.
> >> One thing I was wondering was how much effort it would be to split
> client/server
> >> through phoenix-core...  Lars laid out a good component view of Phoenix
> >> whose first step might be PHOENIX-5483, but we could focus on the highest
> >> level separation rather than bottom up.  However, even that thread
> linked
> >> there talks about a client-facing API which we can piggyback on for this
> use.
> >> Say phoenix-public-api or similar.
> >>
> >> On Wed, Apr 7, 2021 at 9:43 AM Jacob Isaac 
> wrote:
> >>
> >>> Hi Josh & Istvan
> >>>

Re: [DISCUSS] Separating client and server side code

2021-04-07 Thread Jacob Isaac
Hi Josh & Istvan

Thanks Istvan for looking into this, I am also interested in solving this
problem,
Let me know how I can help?

Thanks
Jacob

On Wed, Apr 7, 2021 at 9:05 AM Josh Elser  wrote:

> Thanks for trying to tackle this sticky problem, Istvan. For the context
> of everyone else, the real-life problem Istvan is trying to fix is that
> you cannot run a Spark application with both HBase and Phoenix jars on
> the classpath.
>
> If I understand this correctly, it's that the HBase API signatures are
> different depending on whether we are "client side" or "server side"
> (within a RegionServer). Your comment on PHOENIX-6053 shows that
> (signatures on Table.java around Protobuf's Service class having shaded
> relocation vs. the original com.google.protobuf coordinates).
>
> I think the reason we have the monolithic phoenix-core is that we have
> so much logic which is executed on both the client and server side. For
> example, we may push a filter operation to the server-side or we many
> run it client-side. That's also why we have the "thin" phoenix-server
> Maven module which just re-packages phoenix-core.
>
> Is it possible that we change phoenix-server so that it contains the
> "server-side" code that we don't want to have using the HBase classes
> with thirdparty relocations, rather than introduce another new Maven
> module?
>
> Looking through your WIP PR too.
>
> On 4/7/21 1:10 AM, Istvan Toth wrote:
> > Hi!
> >
> > I've been working on getting Phoenix working with
> hbase-shaded-client.jar,
> > and I am finally getting traction.
> >
> > One of the issues that I encountered is that we are mixing client and
> > server side code in phoenix-core, and there's a
> > mutual interdependence between the two.
> >
> > Fixing this is not hard, as it's mostly about replacing .class.getName()
> s
> > with string constants, and moving around some inconveniently placed
> static
> > utility methods, and now I have a WIP version where the client side
> doesn't
> > depend on server classes.
> >
> > However, unless we change the project structure, and factor out the
> classes
> > that depend on server-side APIs, this will be extremely fragile, as any
> > change can (and will) re-introduce the circular dependency between the
> > classes.
> >
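The decoupling Istvan describes can be sketched in isolation. The snippet below is a hypothetical illustration (not actual Phoenix code): referring to a server-side coprocessor by a string constant, rather than via `.class.getName()`, removes the client's compile-time dependency on the server class.

```java
public class CoprocessorRef {
    // Coupled form (requires the server class at compile time):
    //   String name = MetaDataEndpointImpl.class.getName();
    // Decoupled form: the client only carries the fully-qualified name.
    static final String META_DATA_COPROC =
        "org.apache.phoenix.coprocessor.MetaDataEndpointImpl";

    public static void main(String[] args) {
        // The constant is usable (e.g. in an HTableDescriptor.addCoprocessor
        // call) even though the class itself is absent from this JVM:
        boolean present;
        try {
            Class.forName(META_DATA_COPROC);
            present = true;
        } catch (ClassNotFoundException e) {
            present = false;
        }
        System.out.println(META_DATA_COPROC + " loaded=" + present);
    }
}
```

The catch is that nothing stops a later change from reintroducing the class reference, which is why Istvan argues the server-dependent classes must live in a separate module.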
> > To solve this issue I propose the following:
> >
> > - clean up phoenix-core, so that only classes that depend only on
> > *hbase-client* (or at worst only on classes that are present in
> > *hbase-shaded-client*) remain. This should be 90+% of the code
> > - move all classes (mostly coprocessors and their support code) that
> use
> > the server API (*hbase-server* mostly) to a new module, say
> > phoenix-coprocessors (the phoenix-server module name is taken). This
> new
> > module depends on phoenix-core.
> > - move all classes that directly depend on MapReduce, and their
> main()
> > classes to the existing phoenix-tools module (which also depends on
> core)
> >
> > The separation would be primarily based on API use, at the first cut I'd
> be
> > fine with keeping all logic in phoenix-core, and referencing that. We may or
> > may not want to move logic that is only used in coprocessors or tools,
> but
> > doesn't use the respective APIs to the new modules later.
> >
> > As for the main artifacts:
> >
> > - *phoenix-server.jar* would include code from all three modules.
> > - A newly added *phoenix-client-byo-shaded-hbase.jar *would include
> only
> > the code from cleaned-up phoenix-core
> > - Ideally, we'd remove the tools and coprocessor code (and
> > dependencies) from the standard and embedded clients, and switch
> > documentation to use *phoenix-server* to run the MR tools, but this
> is
> > optional.
> >
> > I am tracking this work in PHOENIX-6053, which has a (currently working)
> > WIP patch attached.
> >
> > I think that this change would fit the pattern established by creating
> the
> > phoenix-tools module,
> > but as this is major change in project structure (even if the actual Java
> > changes are trivial),
> > I'd like to gather your input on this approach (please also speak up if
> you
> > agree).
> >
> > regards
> > Istvan
> >
>


Re: [ANNOUNCE] New Phoenix PMC Member: Xinyi Yan

2021-03-31 Thread Jacob Isaac
Congrats Xinyi !

On Wed, Mar 31, 2021 at 11:15 AM Kadir Ozdemir
 wrote:

> Congratulations Xinyi!
>
> On Wed, Mar 31, 2021 at 10:53 AM Viraj Jasani  wrote:
>
> > Congratulations Xinyi !!
> >
> > On Wed, 31 Mar 2021 at 11:02 PM, Chinmay Kulkarni <
> > chinmayskulka...@apache.org> wrote:
> >
> > > Each Apache project is governed by a Project Management Committee, or
> > PMC.
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Xinyi
> > Yan
> > > has accepted our invitation to join.
> > >
> > > Please join me in welcoming Xinyi!
> > >
> >
>


[jira] [Created] (PHOENIX-6432) Add support for additional load generators

2021-03-27 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6432:


 Summary: Add support for additional load generators
 Key: PHOENIX-6432
 URL: https://issues.apache.org/jira/browse/PHOENIX-6432
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6430) Add support for full row update for tables when no columns specfied in scenario

2021-03-27 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6430:
-
Summary: Add support for full row update for tables when no columns 
specfied in scenario  (was: Added support for full row update for tables when 
no columns specfied in scenario)

> Add support for full row update for tables when no columns specfied in 
> scenario
> ---
>
> Key: PHOENIX-6430
> URL: https://issues.apache.org/jira/browse/PHOENIX-6430
> Project: Phoenix
>  Issue Type: Sub-task
>    Reporter: Jacob Isaac
>Priority: Major
>






[jira] [Updated] (PHOENIX-6431) Add support for auto assigning pmfs

2021-03-27 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6431:
-
Summary: Add support for auto assigning pmfs  (was: Added support for auto 
assigning pmfs)

> Add support for auto assigning pmfs
> ---
>
> Key: PHOENIX-6431
> URL: https://issues.apache.org/jira/browse/PHOENIX-6431
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>Priority: Major
>
> When defining a load profile it may be convenient to not specify the 
> probability distribution weights at all times





[jira] [Created] (PHOENIX-6431) Added support for auto assigning pmfs

2021-03-27 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6431:


 Summary: Added support for auto assigning pmfs
 Key: PHOENIX-6431
 URL: https://issues.apache.org/jira/browse/PHOENIX-6431
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


When defining a load profile it may be convenient to not specify the 
probability distribution weights at all times
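One way the auto-assignment could work (a minimal sketch; Pherf's actual behavior may differ) is to give each unspecified weight an even share of the remaining probability mass:

```java
import java.util.Arrays;

public class AutoWeights {
    // Fill in missing weights with an even share of the remaining
    // probability mass (weights are percentages summing to 100).
    static int[] autoAssign(Integer[] declared) {
        int assigned = 0, missing = 0;
        for (Integer w : declared) {
            if (w == null) missing++; else assigned += w;
        }
        int share = missing == 0 ? 0 : (100 - assigned) / missing;
        int[] out = new int[declared.length];
        for (int i = 0; i < declared.length; i++) {
            out[i] = declared[i] == null ? share : declared[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // One operation declares 50%; the other two split the rest evenly.
        System.out.println(Arrays.toString(autoAssign(new Integer[]{50, null, null})));
    }
}
```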





[jira] [Created] (PHOENIX-6430) Added support for full row update for tables when no columns specfied in scenario

2021-03-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6430:


 Summary: Added support for full row update for tables when no 
columns specfied in scenario
 Key: PHOENIX-6430
 URL: https://issues.apache.org/jira/browse/PHOENIX-6430
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac








[jira] [Created] (PHOENIX-6429) Add support for global connections and sequential data generators

2021-03-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6429:


 Summary: Add support for global connections and sequential data 
generators
 Key: PHOENIX-6429
 URL: https://issues.apache.org/jira/browse/PHOENIX-6429
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac


We may at times want to upsert or query using global connections. 

Also add additional sequential data generators in addition to INTEGER and 
VARCHAR data types.
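A sequential generator of this kind can be sketched as follows (hypothetical illustration, not Pherf's actual API): a single atomic counter serves both INTEGER and VARCHAR columns, and is safe to share across concurrently writing threads.

```java
import java.util.concurrent.atomic.AtomicLong;

public class SequentialGen {
    private final AtomicLong next = new AtomicLong(0);

    // Monotonically increasing values for INTEGER columns.
    long nextInt() { return next.getAndIncrement(); }

    // The same counter rendered with a prefix for VARCHAR columns.
    String nextVarchar(String prefix) { return prefix + next.getAndIncrement(); }

    public static void main(String[] args) {
        SequentialGen g = new SequentialGen();
        System.out.println(g.nextInt());
        System.out.println(g.nextVarchar("row_"));
    }
}
```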





[jira] [Created] (PHOENIX-6417) Fix PHERF ITs that are failing in the local builds

2021-03-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6417:


 Summary: Fix PHERF ITs that are failing in the local builds
 Key: PHOENIX-6417
 URL: https://issues.apache.org/jira/browse/PHOENIX-6417
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac
 Fix For: 4.17.0








[jira] [Created] (PHOENIX-6416) Ensure that PHERF ITs are enabled and run during builds

2021-03-17 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6416:


 Summary: Ensure that PHERF ITs are enabled and run during builds
 Key: PHOENIX-6416
 URL: https://issues.apache.org/jira/browse/PHOENIX-6416
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Jacob Isaac
Assignee: Jacob Isaac
 Fix For: 4.17.0








[jira] [Updated] (PHOENIX-6118) Multi Tenant Workloads using PHERF

2021-03-08 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6118:
-
Parent: PHOENIX-6406
Issue Type: Sub-task  (was: Improvement)

> Multi Tenant Workloads using PHERF
> --
>
> Key: PHOENIX-6118
> URL: https://issues.apache.org/jira/browse/PHOENIX-6118
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
> number of tenant views.
> In the absence of support for dynamically creating a large number of tenant
> views (multi-leveled views) and querying them in a generic framework, teams
> have to write custom logic to replay/run functional and perf testing.





[jira] [Created] (PHOENIX-6406) PHERF Improvements

2021-03-08 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6406:


 Summary: PHERF Improvements
 Key: PHOENIX-6406
 URL: https://issues.apache.org/jira/browse/PHOENIX-6406
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.17.0
Reporter: Jacob Isaac
Assignee: Jacob Isaac


Features like PHOENIX_TTL and Splittable SYSCAT need to be tested for a large 
number of tenant views.

In general, during releases, we need to have a perf framework to assess 
improvements/regressions that were introduced as part of the release.
 * Support for dynamically creating multi-leveled views and querying them in a 
generic framework
 * Support for global vs tenant connection when running load.
 * Support for various load generators.





Getting exception when trying to use/integrate phoenix-client-hbase-1.6-4.16.0.jar in a multi-module project

2021-03-04 Thread Jacob Isaac
Hi Istvan

Wondering if you have faced similar errors?

When trying to use/integrate phoenix-client-hbase-1.6-4.16.0.jar in a
multi-module project which includes other jars/dependencies from hbase,
hadoop and more on the classpath
Getting the following error when trying to create a connection
I have narrowed it down to a couple of jars that are offending although not
completely sure why (perhaps class loading issues?)
hbase-client-.jar
hbase-protocol-.jar

Error: Can't find method newStub in
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService!
(state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Can't find method newStub
in org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService!
at
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:146)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility(ConnectionQueryServicesImpl.java:1593)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1390)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1854)
at
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:3016)
at
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1098)
at
org.apache.phoenix.compile.CreateTableCompiler$CreateTableMutationPlan.execute(CreateTableCompiler.java:384)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:415)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:397)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:396)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:384)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1867)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3195)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3158)
at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3158)
at
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
at
org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at
sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: java.lang.IllegalArgumentException: Can't find method newStub in
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService!
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:48)
at
org.apache.hadoop.hbase.protobuf.ProtobufUtil.newServiceStub(ProtobufUtil.java:1934)
at org.apache.hadoop.hbase.client.HTable$15.call(HTable.java:1769)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodException:
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.newStub(com.google.protobuf.RpcChannel)
at java.lang.Class.getMethod(Class.java:1786)
at org.apache.hadoop.hbase.util.Methods.call(Methods.java:40)
... 6 more
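The root-cause pattern here is a reflective method lookup that uses a different parameter type than the one in the compiled signature, which is what happens when shaded and unshaded copies of com.google.protobuf.RpcChannel are mixed on the classpath. It can be reproduced in isolation (the stand-in types below are hypothetical):

```java
import java.lang.reflect.Method;

public class NewStubLookupDemo {
    interface ChannelA {}  // stands in for com.google.protobuf.RpcChannel
    interface ChannelB {}  // stands in for a shaded relocation of it

    static class Service {
        // "Generated" code compiled against ChannelA only.
        public static Service newStub(ChannelA ch) { return new Service(); }
    }

    public static void main(String[] args) {
        try {
            // A caller holding the other channel type looks the method up
            // with that type, and the lookup fails just like in the report.
            Method m = Service.class.getMethod("newStub", ChannelB.class);
            System.out.println("found " + m);
        } catch (NoSuchMethodException e) {
            System.out.println("NoSuchMethodException: " + e.getMessage());
        }
    }
}
```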


[jira] [Assigned] (PHOENIX-6374) Publish perf workload results and analysis

2021-02-09 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6374:


Assignee: Jacob Isaac

> Publish perf workload results and analysis
> --
>
> Key: PHOENIX-6374
> URL: https://issues.apache.org/jira/browse/PHOENIX-6374
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.x
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
> Ran perf workloads against 4.14.x, 4.15.x and 4.16RC1 build.
> The results and observations are published here for review -
> https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#
>  





[jira] [Created] (PHOENIX-6374) Publish perf workload results and analysis

2021-02-09 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6374:


 Summary: Publish perf workload results and analysis
 Key: PHOENIX-6374
 URL: https://issues.apache.org/jira/browse/PHOENIX-6374
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.x
Reporter: Jacob Isaac


Ran perf workloads against 4.14.x, 4.15.x and 4.16RC1 build.

The results and observations are published here for review -

https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#

 





Re: [VOTE] Release of Apache Phoenix 4.16.0 RC2

2021-02-09 Thread Jacob Isaac
+1 (non-binding)

mvn clean install -DskipTests -Dhbase.profile=(1.6/1.3) (successful)

Tested various Upserts, Queries on various types of tables. (successful)
Ran pherf (successful)
./bin/pherf-standalone.py  -l -q -z localhost -schemaFile
<-schema-file-name>.sql -scenarioFile .xml (successful)

Nit: ./bin/phoenix_utils.py gives the following error
testjar:
Traceback (most recent call last):
  File "./bin/phoenix_utils.py", line 215, in 
print("phoenix_queryserver_jar:", phoenix_queryserver_jar)
NameError: name 'phoenix_queryserver_jar' is not defined


Ran the following MR jobs
/hbase/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool -op
/tmp/indexing.log -v AFTER -dt  -it 
(successful)
/hbase/bin/hbase org.apache.hadoop.hbase.mapreduce.RowCounter 
(successful)

Had executed perf workloads using the RC1 build
The results and analysis can be found here -
https://docs.google.com/document/d/19QHG6vvdxwCNkT3nqu8N-ift_1OIn161pqtJx1UcXiY/edit#

Thanks
Jacob

On Mon, Feb 8, 2021 at 6:48 PM Ankit Singhal  wrote:

> +1 (binding)
>
>  * Download source and build - OK
>  * Ran some DDLs and DMLs on fresh cluster - OK
>  * Signatures and checksums for src and bin(1.3)- OK
>  * apache-rat:check - SUCCESS
>  * CHANGES and RELEASENOTES - OK
>  * Unit tests( mvn clean install -Dit.test=noITs, though code-coverage
> check failed for me) - Ok
>
> Regards,
> Ankit Singhal
>
> On Mon, Feb 8, 2021 at 1:40 PM Chinmay Kulkarni <
> chinmayskulka...@gmail.com>
> wrote:
>
> > +1 (Binding)
> >
> > Tested against hbase-1.3 and hbase-1.6
> >
> > * Build from source (mvn clean install -DskipTests
> > -Dhbase.profile=1.3/1.6): OK
> > * Green build: OK (thanks for triggering this Viraj)
> > * Did some basic DDL, queries, upserts, deletes and everything looked
> fine:
> > OK
> > * Did some upgrade testing: Create tables, views, indices from an old
> > client, query, upsert. Then upgrade to 4.16 metadata, query, upsert from
> an
> > old client, then upgrade the client and query, upsert from a new client:
> OK
> > * Verified checksums: OK
> > * Verified signatures: OK
> > * mvn clean apache-rat:check: OK
> >
> > On Sun, Feb 7, 2021 at 10:03 PM Viraj Jasani  wrote:
> >
> > > +1 (non-binding)
> > >
> > > Clean build:
> > >
> >
> https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/29/
> > >
> > > Tested against HBase-1.6 profile:
> > >
> > > * Checksum : ok
> > > * Rat check (1.8.0_171): ok
> > >  - mvn clean apache-rat:check
> > > * Built from source (1.8.0_171): ok
> > >  - mvn clean install  -DskipTests
> > > * Basic testing with mini cluster: ok
> > > * Unit tests pass (1.8.0_171): failed (passing eventually)
> > >  - mvn clean package  && mvn verify  -Dskip.embedded
> > >
> > >
> > > [ERROR] Tests run: 23, Failures: 0, Errors: 1, Skipped: 0, Time
> elapsed:
> > > 197.428 s <<< FAILURE! - in org.apache.phoenix.end2end.AggregateIT
> > > [ERROR]
> > >
> >
> testOrderByOptimizeForClientAggregatePlanBug4820(org.apache.phoenix.end2end.AggregateIT)
> > > Time elapsed: 9.055 s  <<< ERROR!
> > > java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to
> create
> > > new native thread
> > > at
> > >
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:239)
> > > at
> > >
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:273)
> > > at
> > >
> >
> org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:434)
> > > at
> > >
> org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:308)
> > >
> > >
> > > [ERROR] Tests run: 37, Failures: 0, Errors: 1, Skipped: 0, Time
> elapsed:
> > > 204.243 s <<< FAILURE! - in
> > org.apache.phoenix.end2end.ArrayAppendFunctionIT
> > > [ERROR]
> > >
> >
> testUpsertArrayAppendFunctionVarchar(org.apache.phoenix.end2end.ArrayAppendFunctionIT)
> > > Time elapsed: 4.286 s  <<< ERROR!
> > > org.apache.phoenix.exception.PhoenixIOException:
> > > org.apache.hadoop.hbase.DoNotRetryIOException: N65:
> > > java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to
> create
> > > new native thread
> > > at
> > >
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:122)
> > > at
> > >
> >
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2151)
> > > at
> > >
> >
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
> > >
> > >
> > > [ERROR] Tests run: 28, Failures: 0, Errors: 1, Skipped: 0, Time
> elapsed:
> > > 147.854 s <<< FAILURE! - in
> > org.apache.phoenix.end2end.ArrayRemoveFunctionIT
> > > [ERROR]
> > >
> >
> testArrayRemoveFunctionWithNull(org.apache.phoenix.end2end.ArrayRemoveFunctionIT)
> > > Time elapsed: 2.519 s  <<< ERROR!
> > > org.apache.phoenix.exception.PhoenixIOException:
> > > java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError:
> > unable
> > > to create new native thread
> > 

Re: [VOTE] Release of Apache Phoenix 4.16.0 RC0

2021-01-28 Thread Jacob Isaac
As I started testing the builds found a dependency issue -
PHOENIX-6348

This will need to be addressed as this affects core functionality related
to running the IndexTool.

Thanks
Jacob


On 2021/01/28 05:09:44, Xinyi Yan  wrote:
> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 4.16.0 RC0. This is the next
> minor release of Phoenix 4, compatible with Apache HBase 1.3, 1.4, 1.5
> and 1.6.
>
> The VOTE will remain open for at least 72 hours.
>
> [ ] +1 Release this package as Apache phoenix 4.16.0
> [ ] -1 Do not release this package because ...
>
> The tag to be voted on is 4.16.0RC0
> https://github.com/apache/phoenix/tree/4.16.0RC0
>
> The release files, including signatures, digests, as well as CHANGES.md
> and RELEASENOTES.md included in this RC can be found at:
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC0/
>
> For a complete list of changes, see:
>
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC0/CHANGES.md
>
> Artifacts are signed with my "CODE SIGNING KEY":
> E4882DD3AB711587
>
> KEYS file available here:
> https://dist.apache.org/repos/dist/dev/phoenix/KEYS
>
>
> Thanks,
> Xinyi
>


[jira] [Created] (PHOENIX-6348) java.lang.NoClassDefFoundError: when running with hbase-1.6

2021-01-28 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6348:


 Summary: java.lang.NoClassDefFoundError: when running with 
hbase-1.6
 Key: PHOENIX-6348
 URL: https://issues.apache.org/jira/browse/PHOENIX-6348
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.0
Reporter: Jacob Isaac
 Fix For: 4.16.0


Getting this error, when running with hbase-1.6

I think this stems from the jar dependency mismatch between phoenix 4.x/4.16 
and hbase-1.6

hbase-1.6 :  commons-cli-1.2.jar 
(https://github.com/apache/hbase/blob/5ec5a5b115ee36fb28903667c008218abd21b3f5/pom.xml#L1260)

phoenix 4.x : commons-cli-1.4.jar 
([https://github.com/apache/phoenix/blob/44d44029597d032af1be54d5e9a70342c1fe4769/pom.xml#L100)]

 

What is the best way to resolve this? Shading?

[~stoty] [~vjasani]

FYI

[~yanxinyi] [~ChinmayKulkarni] [~kadir]

 

Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/commons/cli/DefaultParser
 at 
org.apache.phoenix.mapreduce.index.IndexTool.parseOptions(IndexTool.java:354)
 at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:788)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
 at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:1201)
Caused by: java.lang.ClassNotFoundException: 
org.apache.commons.cli.DefaultParser
 at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
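A classpath can be probed for the offending class directly. DefaultParser exists only in commons-cli 1.3 and later, so its absence indicates the 1.2 jar (or no commons-cli at all) is winning (illustrative sketch):

```java
public class CliCompatCheck {
    public static void main(String[] args) {
        // DefaultParser was added in commons-cli 1.3; HBase 1.6 ships 1.2.
        try {
            Class.forName("org.apache.commons.cli.DefaultParser");
            System.out.println("commons-cli >= 1.3 on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("DefaultParser not found: commons-cli 1.2 (or none) wins");
        }
    }
}
```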





[jira] [Created] (PHOENIX-6341) Enable running IT tests from PHERF module during builds and patch checkins

2021-01-26 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6341:


 Summary: Enable running IT tests from PHERF module during builds 
and patch checkins
 Key: PHOENIX-6341
 URL: https://issues.apache.org/jira/browse/PHOENIX-6341
 Project: Phoenix
  Issue Type: Test
Affects Versions: 4.x
Reporter: Jacob Isaac
 Fix For: 4.x








[jira] [Updated] (PHOENIX-6339) Older client using aggregate queries shows incorrect results.

2021-01-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6339:
-
Issue Type: Bug  (was: Improvement)

> Older client using aggregate queries shows incorrect results.
> -
>
> Key: PHOENIX-6339
> URL: https://issues.apache.org/jira/browse/PHOENIX-6339
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.0
>    Reporter: Jacob Isaac
>Priority: Blocker
> Fix For: 4.16.0
>
>
> When running an older client (e.g. 4.15) against a 4.16 server,
> the output of aggregate queries is incorrect -
> expected one row with the count, actual 9 rows with counts.
> The 9 rows correspond to the number of regions in the data set. As shown in 
> the explain plan.
> Connected to: Phoenix (version 4.15)
> Driver: PhoenixEmbeddedDriver (version 4.15)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 225/225 (100%) Done
> Done
> sqlline version 1.5.0
> 0: jdbc:phoenix:localhost> select count(*) from 
> BENCHMARK.BM_AGGREGATION_TABLE_2;
> +---+
> | COUNT(1) |
> +---+
> | 2389483 |
> | 2319177 |
> | 1958007 |
> | 2389483 |
> | 2319178 |
> | 1958005 |
> | 2233646 |
> | 2249033 |
> | 2183988 |
> +---+
> 9 rows selected (6.56 seconds)
> 0: jdbc:phoenix:localhost> explain select count(*) from 
> BENCHMARK.BM_AGGREGATION_TABLE_2;
> +---+-+++
> | PLAN | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> +---+-+++
> | CLIENT 9-CHUNK 10191406 ROWS 1887436990 BYTES PARALLEL 1-WAY FULL SCAN OVER 
> BENCHMARK.BM_AGGREGATION_TABLE_2 | 1887436990 | 10191406 | 1611584394492 |
> | SERVER FILTER BY FIRST KEY ONLY | 1887436990 | 10191406 | 1611584394492 |
> | SERVER AGGREGATE INTO SINGLE ROW | 1887436990 | 10191406 | 1611584394492 |
> +---+-+++





[jira] [Created] (PHOENIX-6339) Older client using aggregate queries shows incorrect results.

2021-01-25 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6339:


 Summary: Older client using aggregate queries shows incorrect 
results.
 Key: PHOENIX-6339
 URL: https://issues.apache.org/jira/browse/PHOENIX-6339
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.16.0
Reporter: Jacob Isaac
 Fix For: 4.16.0


When running an older client (e.g. 4.15) against a 4.16 server,
the output of aggregate queries is incorrect -

expected one row with the count, actual 9 rows with counts.

The 9 rows correspond to the number of regions in the data set. As shown in the 
explain plan.

Connected to: Phoenix (version 4.15)
Driver: PhoenixEmbeddedDriver (version 4.15)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true 
to skip)...
225/225 (100%) Done
Done
sqlline version 1.5.0
0: jdbc:phoenix:localhost> select count(*) from 
BENCHMARK.BM_AGGREGATION_TABLE_2;
+---+
| COUNT(1) |
+---+
| 2389483 |
| 2319177 |
| 1958007 |
| 2389483 |
| 2319178 |
| 1958005 |
| 2233646 |
| 2249033 |
| 2183988 |
+---+
9 rows selected (6.56 seconds)
0: jdbc:phoenix:localhost> explain select count(*) from 
BENCHMARK.BM_AGGREGATION_TABLE_2;
+---+-+++
| PLAN | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
+---+-+++
| CLIENT 9-CHUNK 10191406 ROWS 1887436990 BYTES PARALLEL 1-WAY FULL SCAN OVER 
BENCHMARK.BM_AGGREGATION_TABLE_2 | 1887436990 | 10191406 | 1611584394492 |
| SERVER FILTER BY FIRST KEY ONLY | 1887436990 | 10191406 | 1611584394492 |
| SERVER AGGREGATE INTO SINGLE ROW | 1887436990 | 10191406 | 1611584394492 |
+---+-+++





[jira] [Updated] (PHOENIX-6312) Need a util method in PhoenixMapReduceUtil along the lines of TableMapReduceUtil.addHBaseDependencyJars

2021-01-12 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6312:
-
Fix Version/s: 4.16.0

> Need a util method in PhoenixMapReduceUtil along the lines of 
> TableMapReduceUtil.addHBaseDependencyJars
> ---
>
> Key: PHOENIX-6312
> URL: https://issues.apache.org/jira/browse/PHOENIX-6312
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.x
>    Reporter: Jacob Isaac
>Priority: Blocker
> Fix For: 4.16.0, 4.x
>
>
> Now that we have phoenix-hbase-compat-x-x-x jars, we need to make the classes 
> in the compat jar available to the MR jobs.
> TableMapReduceUtil.addHBaseDependencyJars is an example of how HBase 
> dependency jars are made available to the MR job.
> We get the following exception when these jars are not made available to MR 
> jobs:
> Error: java.lang.ClassNotFoundException: 
> org.apache.phoenix.compat.hbase.CompatRpcControllerFactory at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> java.lang.ClassLoader.defineClass1(Native Method) at 
> java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
> java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
> java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at 
> java.net.URLClassLoader.access$100(URLClassLoader.java:73) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:368) at 
> java.net.URLClassLoader$1.run(URLClassLoader.java:362) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:361) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
> org.apache.phoenix.query.QueryServicesOptions.(QueryServicesOptions.java:288)
>  at 
> org.apache.phoenix.query.QueryServicesImpl.(QueryServicesImpl.java:36) 
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getQueryServices(PhoenixDriver.java:197)
>  at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:235)
>  at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
>  at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) at 
> java.sql.DriverManager.getConnection(DriverManager.java:664) at 
> java.sql.DriverManager.getConnection(DriverManager.java:208) at 
> org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
>  at 
> org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
>  at 
> org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat.getQueryPlan(PhoenixServerBuildIndexInputFormat.java:94)
>  at 
> org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:79)
>  at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.(MapTask.java:521)
>  at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at 
> org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at 
> org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
>  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
>  





[jira] [Created] (PHOENIX-6312) Need a util method in PhoenixMapReduceUtil along the lines of TableMapReduceUtil.addHBaseDependencyJars

2021-01-12 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6312:


 Summary: Need a util method in PhoenixMapReduceUtil along the 
lines of TableMapReduceUtil.addHBaseDependencyJars
 Key: PHOENIX-6312
 URL: https://issues.apache.org/jira/browse/PHOENIX-6312
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.x
Reporter: Jacob Isaac
 Fix For: 4.x


Now that we have phoenix-hbase-compat-x-x-x jars, we need to make the classes 
in the compat jar available to the MR jobs.

TableMapReduceUtil.addHBaseDependencyJars is an example of how HBase dependency 
jars are made available to the MR job.

We get the following exception when these jars are not made available to MR jobs:
Error: java.lang.ClassNotFoundException: 
org.apache.phoenix.compat.hbase.CompatRpcControllerFactory at 
java.net.URLClassLoader.findClass(URLClassLoader.java:381) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
java.lang.ClassLoader.defineClass1(Native Method) at 
java.lang.ClassLoader.defineClass(ClassLoader.java:763) at 
java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at 
java.net.URLClassLoader.defineClass(URLClassLoader.java:467) at 
java.net.URLClassLoader.access$100(URLClassLoader.java:73) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:368) at 
java.net.URLClassLoader$1.run(URLClassLoader.java:362) at 
java.security.AccessController.doPrivileged(Native Method) at 
java.net.URLClassLoader.findClass(URLClassLoader.java:361) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:357) at 
org.apache.phoenix.query.QueryServicesOptions.(QueryServicesOptions.java:288)
 at 
org.apache.phoenix.query.QueryServicesImpl.(QueryServicesImpl.java:36) at 
org.apache.phoenix.jdbc.PhoenixDriver.getQueryServices(PhoenixDriver.java:197) 
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:235)
 at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:142)
 at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221) at 
java.sql.DriverManager.getConnection(DriverManager.java:664) at 
java.sql.DriverManager.getConnection(DriverManager.java:208) at 
org.apache.phoenix.mapreduce.util.ConnectionUtil.getConnection(ConnectionUtil.java:113)
 at 
org.apache.phoenix.mapreduce.util.ConnectionUtil.getInputConnection(ConnectionUtil.java:58)
 at 
org.apache.phoenix.mapreduce.PhoenixServerBuildIndexInputFormat.getQueryPlan(PhoenixServerBuildIndexInputFormat.java:94)
 at 
org.apache.phoenix.mapreduce.PhoenixInputFormat.createRecordReader(PhoenixInputFormat.java:79)
 at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.(MapTask.java:521)
 at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764) at 
org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:177) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1926)
 at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:171)
 





[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-17 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: PHOENIX-5601.master.001.patch

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x.003.patch, PHOENIX-5601.master.001.patch
>
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.
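The masking behavior described above can be sketched roughly as follows (an illustrative Python simplification with hypothetical field names, not the actual coprocessor code):

```python
def is_expired(empty_column_ts_ms: int, now_ms: int, ttl_seconds: int) -> bool:
    # A row is considered expired when the empty-column timestamp is older
    # than the configured TTL.
    return now_ms - empty_column_ts_ms > ttl_seconds * 1000

def mask_expired(rows, now_ms, ttl_seconds):
    # Mask expired rows from query results; actual deletion happens
    # separately, when the DELETE_VIEW_TTL_EXPIRED flag is present.
    return [row for row in rows
            if not is_expired(row["ts"], now_ms, ttl_seconds)]

rows = [{"ts": 1_000}, {"ts": 9_000_000}]
print(mask_expired(rows, now_ms=10_000_000, ttl_seconds=3600))
# keeps only the row whose timestamp is within the TTL window
```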





[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x.002.patch)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Summary: PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - 
PhoenixTTLRegionObserver  (was: Add a new Coprocessor - ViewTTLAware 
Coprocessor)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-5601) PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x.001.patch)

> PHOENIX-5601 Add a new coprocessor for PHOENIX_TTL - PhoenixTTLRegionObserver
> -
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.master.008.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-16 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Attachment: (was: PHOENIX-5601.4.x-HBase-1.3.008.patch)

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Reopened] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-11 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reopened PHOENIX-5601:
--

After discussion with [~kozdemir] [~larsh] and others, we arrived at the 
following decision:
 # Client-side masking may not handle all use cases, e.g. server-side 
scans.
 # Since the long-term goal is to extend this to Phoenix tables too, using a 
coprocessor might be more efficient and make it easier to manage dependencies 
with other backend processes such as backups and compaction.

More details are in the design doc - PHOENIX-5934

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x-HBase-1.3.008.patch, 
> PHOENIX-5601.master.008.patch
>
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-5601) Add a new Coprocessor - ViewTTLAware Coprocessor

2020-11-11 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-5601:
-
Affects Version/s: (was: 4.15.0)
   4.16.0

> Add a new Coprocessor - ViewTTLAware Coprocessor
> 
>
> Key: PHOENIX-5601
> URL: https://issues.apache.org/jira/browse/PHOENIX-5601
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.1.0, 4.16.0
>    Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-5601.4.x-HBase-1.3.008.patch, 
> PHOENIX-5601.master.008.patch
>
>
>  * Add a new coprocessor - ViewTTLAware Coprocessor - that will intercept 
> scan/get requests to inject a new ViewTTLAware scanner.
> The scanner will:
>   * Use the row timestamp of the empty column to determine whether the row TTL 
> has expired and mask expired rows from the underlying query results.
>   * Use the row timestamp to delete expired rows when the DELETE_VIEW_TTL_EXPIRED 
> flag is present.





[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-28 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: (was: PHOENIX-6171.4.x.002.patch)

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 4.x
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.16.0
>
>






[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: (was: PHOENIX-6171.4.x.001.patch)

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.x
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-6171.4.x.002.patch
>
>






[jira] [Updated] (PHOENIX-6171) Child views should not be allowed to override the parent view PHOENIX_TTL attribute.

2020-10-25 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6171:
-
Attachment: PHOENIX-6171.4.x.002.patch

> Child views should not be allowed to override the parent view PHOENIX_TTL 
> attribute.
> 
>
> Key: PHOENIX-6171
> URL: https://issues.apache.org/jira/browse/PHOENIX-6171
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.x
>    Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Attachments: PHOENIX-6171.4.x.002.patch
>
>






[jira] [Created] (PHOENIX-6179) Relax the MaxLookBack age checks during an upgrade

2020-10-06 Thread Jacob Isaac (Jira)
Jacob Isaac created PHOENIX-6179:


 Summary: Relax the MaxLookBack age checks during an upgrade
 Key: PHOENIX-6179
 URL: https://issues.apache.org/jira/browse/PHOENIX-6179
 Project: Phoenix
  Issue Type: Bug
Reporter: Jacob Isaac


Getting this error when trying to upgrade a cluster - Error: ERROR 538 (42915): 
Cannot use SCN to look further back in the past beyond the configured max 
lookback age (state=42915,code=538)

During the upgrade, the SCN for the connection is set to the Phoenix version 
timestamp, which is a small number.
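The check being relaxed compares the connection SCN against the max lookback window; roughly (an illustrative Python simplification with assumed names, not the actual server-side code):

```python
def scn_within_lookback(scn_ms: int, now_ms: int, max_lookback_age_ms: int,
                        is_upgrade: bool = False) -> bool:
    # Proposed relaxation: skip the max-lookback check during an upgrade,
    # where the SCN is the (tiny) Phoenix version timestamp.
    if is_upgrade:
        return True
    return now_ms - scn_ms <= max_lookback_age_ms

# A Phoenix version timestamp is minuscule compared to wall-clock time,
# so the unrelaxed check rejects the upgrade connection with ERROR 538.
print(scn_within_lookback(scn_ms=33, now_ms=1_600_000_000_000,
                          max_lookback_age_ms=86_400_000))  # False
```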





[jira] [Updated] (PHOENIX-6170) PHOENIX_TTL spec should be in seconds instead of milliseconds

2020-10-05 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac updated PHOENIX-6170:
-
Summary: PHOENIX_TTL spec should be in seconds instead of milliseconds  
(was: PHOENIX_TTL spec should in seconds instead of milliseconds)

> PHOENIX_TTL spec should be in seconds instead of milliseconds
> -
>
> Key: PHOENIX-6170
> URL: https://issues.apache.org/jira/browse/PHOENIX-6170
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
> When defining the PHOENIX_TTL spec, it should be specified in seconds, which 
> is also how the HBase TTL value is set.
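Changing the unit implies that any existing millisecond-based PHOENIX_TTL values would need converting; a minimal sketch of such a conversion (a hypothetical helper, not Phoenix code):

```python
def phoenix_ttl_ms_to_seconds(ttl_ms: int) -> int:
    # Round up so a nonzero millisecond TTL never collapses to 0 seconds.
    return -(-ttl_ms // 1000)  # ceiling division

print(phoenix_ttl_ms_to_seconds(86_400_000))  # 86400 (one day)
print(phoenix_ttl_ms_to_seconds(1500))        # 2
```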





[jira] [Assigned] (PHOENIX-6170) PHOENIX_TTL spec should in seconds instead of milliseconds

2020-10-01 Thread Jacob Isaac (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jacob Isaac reassigned PHOENIX-6170:


Assignee: Jacob Isaac

> PHOENIX_TTL spec should in seconds instead of milliseconds
> --
>
> Key: PHOENIX-6170
> URL: https://issues.apache.org/jira/browse/PHOENIX-6170
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: Jacob Isaac
>    Assignee: Jacob Isaac
>Priority: Major
>
> When defining the PHOENIX_TTL spec it should be specified in seconds, which 
> is also how HBase TTL value is set.




