[jira] [Updated] (PHOENIX-5478) IndexTool mapper task should not timeout

2019-10-22 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5478:
---
Attachment: PHOENIX-5478.master.003.patch

> IndexTool mapper task should not timeout 
> -
>
> Key: PHOENIX-5478
> URL: https://issues.apache.org/jira/browse/PHOENIX-5478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Attachments: PHOENIX-5478.master.001.patch, 
> PHOENIX-5478.master.002.patch, PHOENIX-5478.master.003.patch
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> In the old design, the IndexTool MR job mapper first scanned the data table 
> rows one by one using Phoenix client code, then constructed the index rows, 
> and finally sent these row mutations to region servers to update the rows in 
> the index table regions. In the new design, this entire process is done on 
> the server side (within a coprocessor), so the mapper issues a single RPC 
> call to instruct the coprocessor to build the entire table region. This RPC 
> call can time out if the table region is large. The temporary workaround 
> currently in place is to set very large timeout values. We should break up 
> the rebuild of a single table region into smaller operations and eliminate 
> the need for large timeout values.
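A minimal sketch of the proposed direction, assuming a hypothetical chunking helper: instead of one RPC for the whole region, the mapper could plan bounded chunks and issue one short call per chunk. All names here are illustrative, not Phoenix APIs.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split one large region rebuild into bounded chunks so
// that each server-side call covers at most `chunkSize` rows and returns well
// before the RPC timeout. Names are hypothetical, not Phoenix APIs.
public class ChunkedRebuild {
    // A half-open row span [start, end) to rebuild in one coprocessor call.
    public static final class Chunk {
        public final long start;
        public final long end;
        Chunk(long start, long end) { this.start = start; this.end = end; }
    }

    // Partition the region's row span into chunks of at most chunkSize rows.
    public static List<Chunk> planChunks(long regionStart, long regionEnd, long chunkSize) {
        List<Chunk> chunks = new ArrayList<>();
        for (long s = regionStart; s < regionEnd; s += chunkSize) {
            chunks.add(new Chunk(s, Math.min(s + chunkSize, regionEnd)));
        }
        return chunks;
    }
}
```

The mapper would then loop over the planned chunks, so no single RPC has to cover the entire region and the large timeout settings become unnecessary.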



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5533) Creating a view or index with a 4.14 client and 4.15.0 server fails with a NullPointerException

2019-10-22 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5533:
---
Attachment: 5533-4.x-HBase-1.5-v2.txt

> Creating a view or index with a 4.14 client and 4.15.0 server fails with a 
> NullPointerException
> ---
>
> Key: PHOENIX-5533
> URL: https://issues.apache.org/jira/browse/PHOENIX-5533
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Assignee: Lars Hofhansl
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5533-4.x-HBase-1.5-v2.txt, 5533-4.x-HBase-1.5.txt, 
> 5533-4.x-HBase-1.5.txt
>
>
> *When calling "CREATE VIEW":*
> {code:java}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1735)
>   ... 9 more
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1464)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1428)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1613)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2731)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1115)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:410)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:392)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1829)
> {code}
> *Similarly when calling "CREATE INDEX"*
> {code:java}
> 2019-10-18 16:06:26,569 ERROR
> [RpcServer.FifoWFPBQ.default.handler=255,queue=21,port=16201]
> coprocessor.MetaDataEndpointImpl: createTable failed
> java.lang.NullPointerException
> at
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1758)
> at
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17218)
> at
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8270)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2207)
> at
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2189)
> at
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35076)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> {code}





[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2019-10-22 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5404:
--
Fix Version/s: 4.15.1

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.1
>
>
> Remove the {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetaDataEndpointImpl.createTable.
> While creating a table or view, we need to ensure that there are no child 
> views that haven't yet been cleaned up by the DropChildView task. Move this 
> check to the client (issue a scan against SYSTEM.CHILD_LINK to see if a 
> single linking row exists).
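A hedged sketch of what the client-side probe could look like: a LIMIT 1 query against SYSTEM.CHILD_LINK for the table's linking row. The column names below are assumptions based on the SYSTEM.CATALOG-style schema, not a verified query, and a real implementation should use a PreparedStatement rather than string concatenation.

```java
// Illustrative only: builds the kind of LIMIT 1 probe the client could issue
// against SYSTEM.CHILD_LINK to detect a leftover child-view linking row.
// Column names are assumptions, not verified Phoenix schema; use a
// PreparedStatement with bind parameters in real code.
public class ChildLinkProbe {
    public static String buildProbeSql(String tenantId, String schemaName, String tableName) {
        return "SELECT 1 FROM SYSTEM.CHILD_LINK"
             + " WHERE TENANT_ID = '" + tenantId + "'"
             + " AND TABLE_SCHEM = '" + schemaName + "'"
             + " AND TABLE_NAME = '" + tableName + "'"
             + " LIMIT 1";
    }
}
```

If the probe returns any row, a child-view linking row still exists and the create should fail (or wait) instead of proceeding.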





[jira] [Updated] (PHOENIX-5478) IndexTool mapper task should not timeout

2019-10-22 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5478:
---
Attachment: PHOENIX-5478.master.002.patch

> IndexTool mapper task should not timeout 
> -
>
> Key: PHOENIX-5478
> URL: https://issues.apache.org/jira/browse/PHOENIX-5478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Attachments: PHOENIX-5478.master.001.patch, 
> PHOENIX-5478.master.002.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In the old design, the IndexTool MR job mapper first scanned the data table 
> rows one by one using Phoenix client code, then constructed the index rows, 
> and finally sent these row mutations to region servers to update the rows in 
> the index table regions. In the new design, this entire process is done on 
> the server side (within a coprocessor), so the mapper issues a single RPC 
> call to instruct the coprocessor to build the entire table region. This RPC 
> call can time out if the table region is large. The temporary workaround 
> currently in place is to set very large timeout values. We should break up 
> the rebuild of a single table region into smaller operations and eliminate 
> the need for large timeout values.





[jira] [Updated] (PHOENIX-5542) UPSERT in VIEW fails using global connection if we have a view index on it and a global index on base table

2019-10-22 Thread Sandeep Pal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Pal updated PHOENIX-5542:
-
Description: 
Stacktrace:

 

 
{code:java}
0: jdbc:phoenix:stg2hdaas-mnds1-1-prd.eng.sfd> UPSERT INTO 
TEST.VIEW_ON_BASE(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE) VALUES 
('00D000SANP1',TO_DATE('1975-1-2 12:00:00'),'004','15','24', 
'34');
[main] INFO org.apache.phoenix.execute.MutationState - Abort successful
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1614)
at org.apache.phoenix.index.IndexMaintainer.(IndexMaintainer.java:569)
at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1165)
at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:325)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:563)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:521)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:928)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1539)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1362)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:675)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:671)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:671)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:413)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:392)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1829)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{code}
 

Steps to reproduce:

 
{code:java}
CREATE TABLE IF NOT EXISTS TEST.BASE_TABLE (
 ORGANIZATION_ID CHAR(15) NOT NULL,
 KEY_PREFIX CHAR(3) NOT NULL, 
 CREATED_DATE DATE,
 CREATED_BY CHAR(15),
 CONSTRAINT PK PRIMARY KEY (
 ORGANIZATION_ID,
 KEY_PREFIX
 )
 ) VERSIONS=1, MULTI_TENANT=true, IMMUTABLE_ROWS=true, REPLICATION_SCOPE=1;
CREATE VIEW IF NOT EXISTS TEST.VIEW_ON_BASE (
 ONE VARCHAR, 
 TWO VARCHAR, 
 THREE VARCHAR
 CONSTRAINT PKVIEW PRIMARY KEY
 (
 ONE, TWO
 )
 )
 AS SELECT * FROM TEST.BASE_TABLE WHERE KEY_PREFIX = '0CY';
CREATE INDEX IF NOT EXISTS VIEW_INDEX_ON_VIEW
 ON TEST.VIEW_ON_BASE (THREE)
 INCLUDE (ONE, TWO);
CREATE INDEX IF NOT EXISTS INDEX_ON_BASE_TABLE
 ON TEST.BASE_TABLE (CREATED_DATE)
 INCLUDE (CREATED_BY) ASYNC REPLICATION_SCOPE=1;
###UPSERT
 UPSERT INTO 
TEST.VIEW_ON_BASE(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE) VALUES 
('00D000SANP1',TO_DATE('1975-1-2 12:00:00'),'004','15','24', 
'34');
 
{code}
 

 

  was:
Stacktrace:

 

 
{code:java}
0: jdbc:phoenix:stg2hdaas-mnds1-1-prd.eng.sfd> UPSERT INTO 
TEST.ODR_MUTABLE_MT_10_22_2_VIEW(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE)
 VALUES ('00D000SANP1',TO_DATE('1975-1-2 
12:00:00'),'004','15','24', '34');
[main] INFO org.apache.phoenix.execute.MutationState - Abort successful
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1614)
at org.apache.phoenix.index.IndexMaintainer.(IndexMaintainer.java:569)
at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1165)
at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:325)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:563)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:521)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:928)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1539)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1362)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:675)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:671)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:671)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:413)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at org.apache

[jira] [Updated] (PHOENIX-5542) UPSERT in VIEW fails using global connection if we have a view index on it and a global index on base table

2019-10-22 Thread Sandeep Pal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sandeep Pal updated PHOENIX-5542:
-
Description: 
Stacktrace:

 

 
{code:java}
0: jdbc:phoenix:stg2hdaas-mnds1-1-prd.eng.sfd> UPSERT INTO 
TEST.ODR_MUTABLE_MT_10_22_2_VIEW(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE)
 VALUES ('00D000SANP1',TO_DATE('1975-1-2 
12:00:00'),'004','15','24', '34');
[main] INFO org.apache.phoenix.execute.MutationState - Abort successful
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1614)
at org.apache.phoenix.index.IndexMaintainer.(IndexMaintainer.java:569)
at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1165)
at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:325)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:563)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:521)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:928)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1539)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1362)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:675)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:671)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:671)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:413)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:392)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1829)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{code}
 

Steps to reproduce:

 
{code:java}
CREATE TABLE IF NOT EXISTS TEST.BASE_TABLE (
 ORGANIZATION_ID CHAR(15) NOT NULL,
 KEY_PREFIX CHAR(3) NOT NULL, 
 CREATED_DATE DATE,
 CREATED_BY CHAR(15),
 CONSTRAINT PK PRIMARY KEY (
 ORGANIZATION_ID,
 KEY_PREFIX
 )
 ) VERSIONS=1, MULTI_TENANT=true, IMMUTABLE_ROWS=true, REPLICATION_SCOPE=1;
CREATE VIEW IF NOT EXISTS TEST.VIEW_ON_BASE (
 ONE VARCHAR, 
 TWO VARCHAR, 
 THREE VARCHAR
 CONSTRAINT PKVIEW PRIMARY KEY
 (
 ONE, TWO
 )
 )
 AS SELECT * FROM TEST.BASE_TABLE WHERE KEY_PREFIX = '0CY';
CREATE INDEX IF NOT EXISTS VIEW_INDEX_ON_VIEW
 ON TEST.VIEW_ON_BASE (THREE)
 INCLUDE (ONE, TWO);
CREATE INDEX IF NOT EXISTS INDEX_ON_BASE_TABLE
 ON TEST.BASE_TABLE (CREATED_DATE)
 INCLUDE (CREATED_BY) ASYNC REPLICATION_SCOPE=1;
###UPSERT
 UPSERT INTO 
TEST.VIEW_ON_BASE(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE) VALUES 
('00D000SANP1',TO_DATE('1975-1-2 12:00:00'),'004','15','24', 
'34');
 
{code}
 

 

  was:
Stacktrace:

 

 
{code:java}
0: jdbc:phoenix:stg2hdaas-mnds1-1-prd.eng.sfd> UPSERT INTO 
TEST.ODR_MUTABLE_MT_10_22_2_VIEW(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE)
 VALUES ('00D000SANP1',TO_DATE('1975-1-2 
12:00:00'),'004','15','24', '34');
[main] INFO org.apache.phoenix.execute.MutationState - Abort successful
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1614)
at org.apache.phoenix.index.IndexMaintainer.(IndexMaintainer.java:569)
at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1165)
at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:325)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:563)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:521)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:928)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1539)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1362)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:675)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:671)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:671)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:413)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393

[jira] [Created] (PHOENIX-5542) UPSERT in VIEW fails using global connection if we have a view index on it and a global index on base table

2019-10-22 Thread Sandeep Pal (Jira)
Sandeep Pal created PHOENIX-5542:


 Summary: UPSERT in VIEW fails using global connection if we have a 
view index on it and a global index on base table
 Key: PHOENIX-5542
 URL: https://issues.apache.org/jira/browse/PHOENIX-5542
 Project: Phoenix
  Issue Type: Bug
Reporter: Sandeep Pal


Stacktrace:

 

 
{code:java}
0: jdbc:phoenix:stg2hdaas-mnds1-1-prd.eng.sfd> UPSERT INTO 
TEST.ODR_MUTABLE_MT_10_22_2_VIEW(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE)
 VALUES ('00D000SANP1',TO_DATE('1975-1-2 
12:00:00'),'004','15','24', '34');
[main] INFO org.apache.phoenix.execute.MutationState - Abort successful
java.lang.ArrayIndexOutOfBoundsException: 127
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1614)
at org.apache.phoenix.index.IndexMaintainer.(IndexMaintainer.java:569)
at org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:143)
at org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1165)
at org.apache.phoenix.util.IndexUtil.generateIndexData(IndexUtil.java:325)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:563)
at org.apache.phoenix.execute.MutationState$1.next(MutationState.java:521)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:928)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:1539)
at org.apache.phoenix.execute.MutationState.commit(MutationState.java:1362)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:675)
at org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:671)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:671)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:413)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:392)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:380)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1829)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
{code}
 

Steps to reproduce:


CREATE TABLE IF NOT EXISTS TEST.BASE_TABLE (
ORGANIZATION_ID CHAR(15) NOT NULL,
KEY_PREFIX CHAR(3) NOT NULL, 
CREATED_DATE DATE,
CREATED_BY CHAR(15),
CONSTRAINT PK PRIMARY KEY (
ORGANIZATION_ID,
KEY_PREFIX
)
) VERSIONS=1, MULTI_TENANT=true, IMMUTABLE_ROWS=true, REPLICATION_SCOPE=1;


CREATE VIEW IF NOT EXISTS TEST.VIEW_ON_BASE (
 ONE VARCHAR, 
 TWO VARCHAR, 
 THREE VARCHAR
 CONSTRAINT PKVIEW PRIMARY KEY
 (
 ONE, TWO
 )
)
AS SELECT * FROM TEST.BASE_TABLE WHERE KEY_PREFIX = '0CY';


CREATE INDEX IF NOT EXISTS VIEW_INDEX_ON_VIEW
ON TEST.VIEW_ON_BASE (THREE)
INCLUDE (ONE, TWO);


CREATE INDEX IF NOT EXISTS INDEX_ON_BASE_TABLE
ON TEST.BASE_TABLE (CREATED_DATE)
INCLUDE (CREATED_BY) ASYNC REPLICATION_SCOPE=1;

### UPSERT 
UPSERT INTO 
TEST.VIEW_ON_BASE(ORGANIZATION_ID,CREATED_DATE,CREATED_BY,ONE,TWO,THREE) VALUES 
('00D000SANP1',TO_DATE('1975-1-2 12:00:00'),'004','15','24', 
'34');

 

 





[jira] [Created] (PHOENIX-5541) Redundant Global Mutable Index Writes

2019-10-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-5541:


 Summary: Redundant Global Mutable Index Writes
 Key: PHOENIX-5541
 URL: https://issues.apache.org/jira/browse/PHOENIX-5541
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.3, 4.15.0
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby


In the inaccurately-named IndexWriter.writeAndKillYourselfOnFailure, there's 
the following code:
{code:java}
public void writeAndKillYourselfOnFailure(Collection<Pair<Mutation, byte[]>> indexUpdates,
        boolean allowLocalUpdates, int clientVersion) throws IOException {
    // convert the strings to htableinterfaces to which we can talk and group by TABLE
    Multimap<HTableInterfaceReference, Mutation> toWrite = resolveTableReferences(indexUpdates);
    writeAndKillYourselfOnFailure(toWrite, allowLocalUpdates, clientVersion);
    writeAndHandleFailure(toWrite, allowLocalUpdates, clientVersion);
}
{code}
writeAndKillYourselfOnFailure and writeAndHandleFailure appear to be identical, 
so calling both results in the same index Cells being written twice. This 
shouldn't affect correctness, but it does affect performance and (temporarily, 
until compaction) storage.





[jira] [Created] (PHOENIX-5540) Full row index write at the last write phase for immutable global indexes

2019-10-22 Thread Kadir OZDEMIR (Jira)
Kadir OZDEMIR created PHOENIX-5540:
--

 Summary: Full row index write at the last write phase for 
immutable global indexes
 Key: PHOENIX-5540
 URL: https://issues.apache.org/jira/browse/PHOENIX-5540
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.15.0
Reporter: Kadir OZDEMIR
Assignee: Gokcen Iskender


See PHOENIX-5539 for the reason for this improvement.





[jira] [Updated] (PHOENIX-5539) Full row index write at the last write phase for mutable global indexes

2019-10-22 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5539:
---
Summary: Full row index write at the last write phase for mutable global 
indexes  (was: Full index write at the last write phase for mutable global 
indexes)

> Full row index write at the last write phase for mutable global indexes
> ---
>
> Key: PHOENIX-5539
> URL: https://issues.apache.org/jira/browse/PHOENIX-5539
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
>
> In the original design for consistent indexes, we do a three-phase write. In 
> the first phase, we write full index rows with unverified status, then we 
> write data table rows, and finally we overwrite the status on the index rows 
> written in the first phase and set it to verified.
> Instead of writing the full index row in the first phase, we can defer the 
> full row write to the last phase. So, in the first phase, we just write the 
> unverified status for the index row, and in the last phase we write the full 
> index row.
> This change does not impact the correctness of the design but improves its 
> efficiency. In the presence of concurrent writes, we skip the last write 
> phase, which leaves the affected index rows in unverified status. Similarly, 
> if the first or second phase write fails, we do not proceed with the third 
> phase.
> Since with this change we write only the empty column with the unverified 
> status value (i.e., 2) for index tables in these failure cases, storage 
> usage will improve as we write less index data. This change also opens up 
> the solution domain for some problems, e.g., handling replication lag issues 
> (please see PHOENIX-5527).





[jira] [Created] (PHOENIX-5539) Full index write at the last write phase for mutable global indexes

2019-10-22 Thread Kadir OZDEMIR (Jira)
Kadir OZDEMIR created PHOENIX-5539:
--

 Summary: Full index write at the last write phase for mutable 
global indexes
 Key: PHOENIX-5539
 URL: https://issues.apache.org/jira/browse/PHOENIX-5539
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.15.0
Reporter: Kadir OZDEMIR
Assignee: Kadir OZDEMIR


In the original design for consistent indexes, we do a three-phase write. In the 
first phase, we write full index rows with unverified status, then we write 
data table rows, and finally we overwrite the status on the index rows written 
in the first phase and set it to verified.

Instead of writing the full index row in the first phase, we can defer the full 
row write to the last phase. So, in the first phase, we just write the 
unverified status for the index row, and in the last phase we write the full 
index row.

This change does not impact the correctness of the design but improves its 
efficiency. In the presence of concurrent writes, we skip the last write phase, 
which leaves the affected index rows in unverified status. Similarly, if the 
first or second phase write fails, we do not proceed with the third phase.

Since with this change we write only the empty column with the unverified 
status value (i.e., 2) for index tables in these failure cases, storage usage 
will improve as we write less index data. This change also opens up the 
solution domain for some problems, e.g., handling replication lag issues 
(please see PHOENIX-5527).
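The revised ordering can be illustrated with a small simulation (pure Java, no HBase). The status constants and maps below are a schematic of the description above, not Phoenix code; in particular, the verified value "1" is an assumption for illustration, while "2" for unverified comes from the text.

```java
import java.util.HashMap;
import java.util.Map;

// Schematic of the proposed write ordering, not Phoenix code.
// Phase 1: write only the empty column with UNVERIFIED on the index row.
// Phase 2: write the data table row.
// Phase 3: write the full index row and mark it VERIFIED; skipped on
//          concurrent writes or earlier failure, leaving it UNVERIFIED.
public class ThreePhaseWrite {
    static final String UNVERIFIED = "2"; // from the description
    static final String VERIFIED = "1";   // assumed value, for illustration

    final Map<String, String> indexStatus = new HashMap<>();
    final Map<String, String> indexRow = new HashMap<>();
    final Map<String, String> dataTable = new HashMap<>();

    // Returns the final status of the index row for this key.
    public String write(String key, String fullIndexRow, boolean skipLastPhase) {
        indexStatus.put(key, UNVERIFIED);     // phase 1: status only, no full row
        dataTable.put(key, fullIndexRow);     // phase 2: data table row
        if (!skipLastPhase) {
            indexRow.put(key, fullIndexRow);  // phase 3: full index row
            indexStatus.put(key, VERIFIED);
        }
        return indexStatus.get(key);
    }
}
```

The key point the sketch captures: when phase 3 is skipped, nothing but the cheap status cell was ever written to the index table, which is where the storage saving comes from.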





[jira] [Updated] (PHOENIX-5538) How to use loadbalance in PhoenixQueryServer

2019-10-22 Thread Song Jun (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Song Jun updated PHOENIX-5538:
--
Summary: How to use loadbalance in PhoenixQueryServer  (was: How to user 
loadbalance in PhoenixQueryServer)

> How to use loadbalance in PhoenixQueryServer
> 
>
> Key: PHOENIX-5538
> URL: https://issues.apache.org/jira/browse/PHOENIX-5538
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Song Jun
>Priority: Minor
>
> As described on the page (ZooKeeper-based load balancer):
> https://phoenix.apache.org/server.html
> How can it be used from the client? There is no example shown there.





[jira] [Created] (PHOENIX-5538) How to user loadbalance in PhoenixQueryServer

2019-10-22 Thread Song Jun (Jira)
Song Jun created PHOENIX-5538:
-

 Summary: How to user loadbalance in PhoenixQueryServer
 Key: PHOENIX-5538
 URL: https://issues.apache.org/jira/browse/PHOENIX-5538
 Project: Phoenix
  Issue Type: Improvement
Reporter: Song Jun


As described on the page (ZooKeeper-based load balancer):
https://phoenix.apache.org/server.html

How can it be used from the client? There is no example shown there.





[jira] [Updated] (PHOENIX-5535) Index rebuilds via UngroupedAggregateRegionObserver should replay delete markers

2019-10-22 Thread Kadir OZDEMIR (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-5535:
---
Attachment: PHOENIX-5535.master.003.patch

> Index rebuilds via UngroupedAggregateRegionObserver should replay delete 
> markers
> 
>
> Key: PHOENIX-5535
> URL: https://issues.apache.org/jira/browse/PHOENIX-5535
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5535.4.x-HBase-1.5.001.patch, 
> PHOENIX-5535.master.001.patch, PHOENIX-5535.master.002.patch, 
> PHOENIX-5535.master.003.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Currently, index rebuilds for global index tables are done on the server 
> side. The Phoenix client generates an aggregate plan using 
> ServerBuildIndexCompiler to scan every data table row on the server side. 
> This compiler sets the scan attributes so that the row mutations scanned by 
> UngroupedAggregateRegionObserver are replayed on the data table in a way 
> that rebuilds the index table rows. During this replay, data table row 
> updates are skipped and only index table rows are updated.
> Phoenix allows column entries to have null values. Null values are 
> represented by HBase column delete markers. This means that an index rebuild 
> must replay these delete markers along with put mutations. To do that, 
> ServerBuildIndexCompiler should use raw scans, but it currently uses regular 
> scans. This leads to incorrect index rebuilds when null values are used.
> A simple test with a data table that has one global index with a covered 
> column that can take a null value is sufficient to reproduce this problem:
>  # Create a data table with columns a, b, and c, where a is the primary key 
> and c can have a null value
>  # Write one row with non-null values
>  # Overwrite the covered column with null (i.e., set it to null)
>  # Create an index on the table where b is the secondary key and c is a 
> covered column
>  # Rebuild the index
>  # Dump the index table
> The index table row should have the null value for the covered column. 
> However, it has the non-null value written at step 2.
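The difference between the two scan modes can be sketched with a toy model of one column's cell history (pure Java; this only mimics HBase semantics for illustration — the real change would be enabling a raw scan, e.g. via Scan.setRaw(true), in ServerBuildIndexCompiler):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of HBase scan semantics for a single column: a regular scan hides
// cells covered by a delete marker, while a raw scan also returns the delete
// marker itself so an index rebuild can replay it. Mimics behavior only.
public class RawScanModel {
    // A cell is a put with a value, or a delete marker (value == null).
    public record Cell(long ts, String value) {
        boolean isDelete() { return value == null; }
    }

    public static List<Cell> scan(List<Cell> cells, boolean raw) {
        if (raw) return new ArrayList<>(cells); // raw: everything, markers included
        long newestDelete = cells.stream().filter(Cell::isDelete)
                .mapToLong(Cell::ts).max().orElse(Long.MIN_VALUE);
        List<Cell> visible = new ArrayList<>();
        for (Cell c : cells) {
            // regular scan: only puts newer than the delete marker are visible
            if (!c.isDelete() && c.ts() > newestDelete) visible.add(c);
        }
        return visible;
    }
}
```

In the repro above, the regular scan returns nothing for the nulled-out covered column, so the rebuild never replays the delete and the stale value from step 2 survives in the index; the raw scan surfaces the marker so it can be replayed.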


