[jira] [Updated] (PHOENIX-6007) PhoenixDB error handling improvements

2020-07-13 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6007:
-
Description: If PhoenixDB receives an HTTP error that is not in the expected 
Jetty format, it will misidentify it as a protobuf error, and raise an 
exception that doesn't even include the HTTP error code (or possibly a protobuf 
exception)  (was: If PhoenixDB receives a HTTP error it not in the expected 
Jetty format, it will misidentify it as a protobuf error, and raise an 
exception that doesn't even include the HTTP error code (or possibly a protobuf 
exception))

> PhoenixDB error handling improvements
> -
>
> Key: PHOENIX-6007
> URL: https://issues.apache.org/jira/browse/PHOENIX-6007
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Major
>
> If PhoenixDB receives an HTTP error that is not in the expected Jetty format, 
> it will misidentify it as a protobuf error, and raise an exception that 
> doesn't even include the HTTP error code (or possibly a protobuf exception)
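The misidentification described above can be sketched as follows. This is a minimal, hypothetical helper for illustration only, not phoenixdb's actual internals: the idea is to check the HTTP status code before handing the body to the protobuf parser, so a non-Jetty error page surfaces as an HTTP error rather than a parse failure.

```python
# Hypothetical sketch (not the actual phoenixdb code): surface the HTTP
# status code instead of blindly parsing an arbitrary error body, which may
# come from a proxy or load balancer rather than the query server.

def parse_avatica_response(status_code, body, parse_protobuf):
    """Return the parsed protobuf message, or raise with the HTTP status."""
    if status_code != 200:
        # Include the status code and a snippet of the body in the error,
        # rather than raising an unrelated protobuf decode exception.
        raise RuntimeError(
            "HTTP error %d from query server: %r" % (status_code, body[:200]))
    return parse_protobuf(body)
```

With this shape, a 503 from a load balancer raises an error that names the status code instead of a misleading protobuf exception.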



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6007) PhoenixDB error handling improvements

2020-07-13 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6007:


 Summary: PhoenixDB error handling improvements
 Key: PHOENIX-6007
 URL: https://issues.apache.org/jira/browse/PHOENIX-6007
 Project: Phoenix
  Issue Type: Improvement
  Components: queryserver
Affects Versions: queryserver-1.0.0
Reporter: Josh Elser
Assignee: Istvan Toth


If PhoenixDB receives an HTTP error that is not in the expected Jetty format, it 
will misidentify it as a protobuf error and raise an exception that doesn't even 
include the HTTP error code (or possibly a protobuf exception)





[jira] [Resolved] (PHOENIX-5967) phoenix-client transitively pulling in phoenix-core

2020-07-13 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-5967.
--
Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

Pushed to master and 4.x.

> phoenix-client transitively pulling in phoenix-core
> ---
>
> Key: PHOENIX-5967
> URL: https://issues.apache.org/jira/browse/PHOENIX-5967
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> Looks like something happened in master where phoenix-client is now 
> transitively pulling in phoenix-core, even though all of phoenix-core and its 
> dependencies are included in the phoenix-client shaded artifact.
> 4.15.0 looks OK, so maybe something inadvertent with the hbase version 
> classifier stuff, [~stoty]?





[jira] [Created] (PHOENIX-6006) Bump queryserver version to 6.0

2020-07-13 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6006:


 Summary: Bump queryserver version to 6.0
 Key: PHOENIX-6006
 URL: https://issues.apache.org/jira/browse/PHOENIX-6006
 Project: Phoenix
  Issue Type: Improvement
  Components: queryserver
Affects Versions: queryserver-1.0.0
Reporter: Istvan Toth
Assignee: Istvan Toth


Releasing queryserver as 1.0.0 would mean that a newer release would have a 
lower version than the old 4.x or 5.x versions.

I propose bumping the version of the unbundled repo to 6.0 to keep the version 
numbering monotonic.





[jira] [Created] (PHOENIX-6005) Bump connectors version to 6.0

2020-07-13 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6005:


 Summary: Bump connectors version to 6.0
 Key: PHOENIX-6005
 URL: https://issues.apache.org/jira/browse/PHOENIX-6005
 Project: Phoenix
  Issue Type: Improvement
  Components: connectors
Affects Versions: connectors-1.0.0
Reporter: Istvan Toth
Assignee: Istvan Toth


Releasing connectors as 1.0.0 would mean that a newer release would have a 
lower version than the old 4.x or 5.x versions.

I propose bumping the version of the unbundled repo to 6.0 to keep the version 
numbering monotonic.





[jira] [Resolved] (PHOENIX-5994) SqlAlchemy schema filtering incorrect semantics

2020-07-13 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-5994.
--
Resolution: Fixed

Committed. 

Thanks for the reviews and the valuable feedback, [~elserj]!

> SqlAlchemy schema filtering incorrect semantics 
> 
>
> Key: PHOENIX-5994
> URL: https://issues.apache.org/jira/browse/PHOENIX-5994
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> The SqlAlchemy driver interprets the _schema=None_ parameter as any/all 
> schemas.
> According to the SqlAlchemy docs, it should mean the default schema.
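The distinction above can be illustrated with a small sketch. The helper and the default-schema value are hypothetical, not the driver's actual code; the point is only the mapping from a SqlAlchemy-style schema argument to a concrete filter.

```python
# Illustrative sketch (not the actual driver code): under SqlAlchemy
# conventions, schema=None selects the connection's default schema; it is
# not a wildcard over every schema.

def resolve_schema_filter(schema, default_schema=""):
    """Map a SqlAlchemy-style schema argument to a concrete filter value."""
    if schema is None:
        # None means "the default schema", per the SqlAlchemy docs.
        return default_schema
    # A concrete name filters to exactly that schema.
    return schema
```

Requesting "all schemas" would need to be expressed explicitly rather than by passing None.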





[jira] [Resolved] (PHOENIX-5996) IndexRebuildRegionScanner.prepareIndexMutationsForRebuild may incorrectly delete index row when a delete and put mutation with the same timestamp

2020-07-13 Thread chenglei (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenglei resolved PHOENIX-5996.
---
Fix Version/s: 4.16.0
   5.1.0
 Assignee: chenglei
   Resolution: Fixed

> IndexRebuildRegionScanner.prepareIndexMutationsForRebuild may incorrectly 
> delete index row when a delete and put mutation with the same timestamp
> -
>
> Key: PHOENIX-5996
> URL: https://issues.apache.org/jira/browse/PHOENIX-5996
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 4.16.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> With PHOENIX-5748, 
> {{IndexRebuildRegionScanner.prepareIndexMutationsForRebuild}} is responsible 
> for generating index table mutations for rebuild.
> In the processing of the data table mutation list, there can be a delete and a 
> put mutation with the same timestamp. If so, the delete and put are processed 
> together in one iteration: first the delete mutation is applied to the put 
> mutation and the current row state, and then the modified put mutation is 
> processed.
> However, when the {{modified put mutation}} is empty, even if the current row 
> state is not empty after the delete mutation is applied, the whole index row is 
> deleted, as in the following line 1191 in 
> {{IndexRebuildRegionScanner.prepareIndexMutationsForRebuild}}:
> {code:java}
> 1189  } else {
> 1190    if (currentDataRowState != null) {
> 1191      Mutation del = indexMaintainer.buildRowDeleteMutation(indexRowKeyForCurrentDataRow,
> 1192          IndexMaintainer.DeleteType.ALL_VERSIONS, ts);
> 1193      indexMutations.add(del);
> 1194      // For the next iteration of the for loop
> 1195      currentDataRowState = null;
> 1196      indexRowKeyForCurrentDataRow = null;
> 1197    }
> 1198  }
> {code}
> I think the above logic is wrong: when the current row state is not empty after 
> the delete mutation is applied, we cannot delete the whole index row; instead 
> we should reuse the logic of applying a delete mutation to the current row 
> state. I wrote a unit test in {{PrepareIndexMutationsForRebuildTest}} to 
> reproduce the case:
> {code:java}
> @Test
> public void testPutDeleteOnSameTimeStampAndPutNullifiedByDelete() throws Exception {
>     SetupInfo info = setup(
>             TABLE_NAME,
>             INDEX_NAME,
>             "ROW_KEY VARCHAR, CF1.C1 VARCHAR, CF2.C2 VARCHAR",
>             "CF2.C2",
>             "ROW_KEY",
>             "");
>     Put dataPut = new Put(Bytes.toBytes(ROW_KEY));
>     addCellToPutMutation(
>             dataPut,
>             Bytes.toBytes("CF2"),
>             Bytes.toBytes("C2"),
>             1,
>             Bytes.toBytes("v2"));
>     addEmptyColumnToDataPutMutation(dataPut, info.pDataTable, 1);
> 
>     addCellToPutMutation(
>             dataPut,
>             Bytes.toBytes("CF1"),
>             Bytes.toBytes("C1"),
>             2,
>             Bytes.toBytes("v1"));
>     addEmptyColumnToDataPutMutation(dataPut, info.pDataTable, 2);
>     Delete dataDel = new Delete(Bytes.toBytes(ROW_KEY));
>     addCellToDelMutation(
>             dataDel,
>             Bytes.toBytes("CF1"),
>             null,
>             2,
>             KeyValue.Type.DeleteFamily);
>     List<Mutation> actualIndexMutations = IndexRebuildRegionScanner.prepareIndexMutationsForRebuild(
>             info.indexMaintainer,
>             dataPut,
>             dataDel);
>     List<Mutation> expectedIndexMutations = new ArrayList<>();
>     byte[] idxKeyBytes = generateIndexRowKey("v2");
> 
>     Put idxPut1 = new Put(idxKeyBytes);
>     addEmptyColumnToIndexPutMutation(idxPut1, info.indexMaintainer, 1);
>     expectedIndexMutations.add(idxPut1);
> 
>     Put idxPut2 = new Put(idxKeyBytes);
>     addEmptyColumnToIndexPutMutation(idxPut2, info.indexMaintainer, 2);
>     expectedIndexMutations.add(idxPut2);
>     assertEqualMutationList(expectedIndexMutations, actualIndexMutations);
> }
> {code}





[jira] [Updated] (PHOENIX-6000) Client side DELETEs should use local indexes for filtering

2020-07-13 Thread Lars Hofhansl (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-6000:
---
Attachment: 6000-4.x-HBase-1.5-v2.txt

> Client side DELETEs should use local indexes for filtering
> --
>
> Key: PHOENIX-6000
> URL: https://issues.apache.org/jira/browse/PHOENIX-6000
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6000-4.x-HBase-1.5-v2.txt, 6000-4.x-HBase-1.5.txt
>
>
> I just noticed that client side DELETEs do not use local indexes for 
> filtering if they do not cover all keys of all other indexes.
> Unless I am missing something, this is not necessary: for local indexes we 
> have the data available in the region and can fill back the uncovered columns 
> (just as we do for SELECTs).





[jira] [Assigned] (PHOENIX-5632) Add more information to SYSTEM.TASK TASK_DATA field apart from the task status

2020-07-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5632:
-

Assignee: (was: Christine Feng)

> Add more information to SYSTEM.TASK TASK_DATA field apart from the task status
> --
>
> Key: PHOENIX-5632
> URL: https://issues.apache.org/jira/browse/PHOENIX-5632
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 4.15.1, 4.16.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It would be helpful for debugging if we could add more information to the 
> TASK_DATA json that is upserted into SYSTEM.TASK apart from just the task 
> status. For example, in failure cases, perhaps we can add the stack trace 
> for the failing task.
>  
> Ideas:
>  * Stacktrace in case of error
>  * Time taken for the task to complete
>  * Name(s) of deleted child view(s)/table(s) per task
>  * The TASK_TYPE column is represented by an int; it may be useful to include 
> the task type in the TASK_DATA column
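The ideas above could yield a payload along these lines. This is a hypothetical sketch; the field names and helper are illustrative, not Phoenix's actual TASK_DATA schema.

```python
import json
import traceback

# Hypothetical sketch of an enriched TASK_DATA payload covering the ideas
# listed above; field names are illustrative, not Phoenix's actual schema.

def build_task_data(status, task_type, duration_ms, dropped=None, error=None):
    data = {
        "TaskStatus": status,
        "TaskType": task_type,      # human-readable type, not just the int code
        "DurationMs": duration_ms,  # time taken for the task to complete
    }
    if dropped:
        # Names of the child views/tables deleted by this task.
        data["DroppedChildViews"] = dropped
    if error is not None:
        # Stack trace in case of error, to aid debugging of failed tasks.
        data["Stacktrace"] = "".join(
            traceback.format_exception(type(error), error, error.__traceback__))
    return json.dumps(data)
```

A consumer could then read the failure cause and timing straight out of SYSTEM.TASK instead of correlating server logs.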





[jira] [Updated] (PHOENIX-5665) Add static analysis tool or linter runs to CI/preCommit build

2020-07-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5665:
--
Labels: phoenix-hardening quality-improvement  (was: phoenix-hardening)

> Add static analysis tool or linter runs to CI/preCommit build
> -
>
> Key: PHOENIX-5665
> URL: https://issues.apache.org/jira/browse/PHOENIX-5665
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: phoenix-hardening, quality-improvement
>
> There are various JIRAs stemming from static analysis tools, such as 
> [PHOENIX-5167|https://issues.apache.org/jira/browse/PHOENIX-5167], and we would 
> benefit from adding this analysis to our general CI/build framework.





[jira] [Assigned] (PHOENIX-5980) MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete mutations

2020-07-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned PHOENIX-5980:
-

Assignee: (was: Chinmay Kulkarni)

> MUTATION_BATCH_FAILED_SIZE metric is incorrectly updated for failing delete 
> mutations
> -
>
> Key: PHOENIX-5980
> URL: https://issues.apache.org/jira/browse/PHOENIX-5980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: metrics, phoenix-hardening, quality-improvement
> Fix For: 4.16.0
>
>
> In the conn.commit() path, we get the number of mutations that failed to be 
> committed in the catch block of MutationState.sendMutations() (see 
> [here|https://github.com/apache/phoenix/blob/dcc88af8acc2ba8df10d2e9d498ab3646fdf0a78/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java#L1195-L1198]).
>  
> In the case of delete mutations, uncommittedStatementIndexes.length always 
> resolves to 1, so we update the metric value by only 1, even though the 
> actual mutation list corresponds to multiple DELETE mutations that failed. 
> In the case of upserts, using unCommittedStatementIndexes.length is fine 
> since each upsert query corresponds to one Put. We should fix the logic 
> for deletes/mixed delete + upsert mutation batch failures.
> This incorrect value is propagated to the global client metrics as well as 
> the MutationMetricQueue metrics.





[jira] [Updated] (PHOENIX-5985) Consolidate custom EnvironmentEdge classes throughout tests

2020-07-13 Thread Chinmay Kulkarni (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated PHOENIX-5985:
--
Labels: beginner newbie phoenix-hardening quality-improvement  (was: 
beginner newbie)

> Consolidate custom EnvironmentEdge classes throughout tests
> ---
>
> Key: PHOENIX-5985
> URL: https://issues.apache.org/jira/browse/PHOENIX-5985
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Minor
>  Labels: beginner, newbie, phoenix-hardening, quality-improvement
>
> A lot of our tests use custom EnvironmentEdge "clocks" which have the 
> same/similar functionality. We should create one common implementation, put it 
> in a TestUtil class, and use it everywhere.





[jira] [Created] (PHOENIX-6004) Metadata operations cannot filter on case-sensitive names

2020-07-13 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6004:


 Summary: Metadata operations cannot filter on case-sensitive names
 Key: PHOENIX-6004
 URL: https://issues.apache.org/jira/browse/PHOENIX-6004
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0, 4.16.0
Reporter: Istvan Toth


The MetaData operations cannot filter on case-sensitive table, schema, column, 
or catalog names.

It looks like at least QueryUtil.getTablesStmt() misses the logic to detect and 
handle case-sensitive names.

Expected behaviour:
{code:java}
meta.getTables(null, null, "\"CamelCase\"", null){code}
should find the table "CamelCase" instead of "CAMELCASE".
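The missing normalization can be sketched as follows. The helper name is hypothetical and this is not the actual QueryUtil code; it only illustrates the standard SQL identifier rule that Phoenix follows: unquoted names fold to upper case, double-quoted names keep their case.

```python
# Illustrative sketch of the missing normalization (helper name is
# hypothetical): strip surrounding double quotes and preserve case for
# quoted identifiers; fold unquoted identifiers to upper case.

def normalize_identifier(name):
    if name is None:
        return None
    if len(name) >= 2 and name.startswith('"') and name.endswith('"'):
        # "CamelCase" -> CamelCase, case preserved
        return name[1:-1]
    # Unquoted names fold to upper case, matching Phoenix's SQL semantics.
    return name.upper()
```

Applying such a step to the table, schema, column, and catalog arguments would let getTables() match the case-sensitive name the caller asked for.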





[jira] [Created] (PHOENIX-6003) Metadata operations via Avatica lose functionality

2020-07-13 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6003:


 Summary: Metadata operations via Avatica lose functionality
 Key: PHOENIX-6003
 URL: https://issues.apache.org/jira/browse/PHOENIX-6003
 Project: Phoenix
  Issue Type: Bug
  Components: queryserver
Affects Versions: queryserver-1.0.0
Reporter: Istvan Toth
Assignee: Istvan Toth


PhoenixDatabaseMetaData.getTables() and some other functions have parameters 
(catalog, schemaPattern) where null and an empty String have different semantics.

The corresponding protobuf fields in Avatica are not nullable, and PQS seems to 
interpret an empty string as null, making it impossible to get the default 
catalog/schema only.
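One common way to carry a nullable string over a non-nullable protobuf field is an explicit presence flag. The sketch below is illustrative only; the names are hypothetical and do not reflect the actual Avatica protobuf schema.

```python
# Sketch of the underlying problem (names are illustrative, not the actual
# Avatica schema): a non-nullable string field cannot distinguish null from
# "", so an explicit presence flag is needed to keep both meanings.

def encode_schema_pattern(schema_pattern):
    """Encode a nullable string as (value, has_value) for a proto-like field."""
    if schema_pattern is None:
        return "", False           # null: field logically absent
    return schema_pattern, True    # "" here means "default schema only"

def decode_schema_pattern(value, has_value):
    """Recover the original nullable string on the server side."""
    return value if has_value else None
```

With a presence flag the server can tell "match any schema" (null) apart from "match only the default schema" (empty string), which is exactly the distinction PQS currently loses.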

 


