[jira] [Commented] (PHOENIX-7171) Update Zookeeper to 3.8.3 when building with HBase 2.4+

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807614#comment-17807614
 ] 

ASF GitHub Bot commented on PHOENIX-7171:
-

stoty closed pull request #1797: PHOENIX-7171 Update Zookeeper to 3.8.3 when 
building with HBase 2.4+
URL: https://github.com/apache/phoenix/pull/1797




> Update Zookeeper to 3.8.3 when building with HBase 2.4+
> ---
>
> Key: PHOENIX-7171
> URL: https://issues.apache.org/jira/browse/PHOENIX-7171
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> HBase has recently upgraded to ZK 3.8 on all active branches.
> Phoenix currently downgrades ZK to 3.5.7 in its dependencies and the shaded 
> artifacts.
> Find a way to avoid that (a pom sketch follows below):
> * Can we avoid explicitly depending on ZK and take the transitive 
> dependency from HBase?
> * Does this work with Omid? (Exclude ZK from the Omid dependency?)
> * Do the Curator versions in Omid and Phoenix work with ZK 3.8?
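
A minimal pom sketch of the exclusion idea from the bullets above — not from
the ticket itself; the Omid artifact coordinates and version property are
assumptions for illustration:

    <!-- Hypothetical sketch: let ZooKeeper come in transitively from HBase by
         excluding the copy that the Omid dependency would otherwise pull in. -->
    <dependency>
      <groupId>org.apache.omid</groupId>
      <artifactId>omid-tso-server</artifactId>
      <version>${omid.version}</version>
      <exclusions>
        <exclusion>
          <groupId>org.apache.zookeeper</groupId>
          <artifactId>zookeeper</artifactId>
        </exclusion>
      </exclusions>
    </dependency>

Whether the Curator versions then resolve cleanly against ZK 3.8 would still
need to be checked, e.g. with mvn dependency:tree.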



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7171) Update Zookeeper to 3.8.3 when building with HBase 2.4+

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807615#comment-17807615
 ] 

ASF GitHub Bot commented on PHOENIX-7171:
-

stoty commented on PR #1797:
URL: https://github.com/apache/phoenix/pull/1797#issuecomment-1895267257

   merged manually




> Update Zookeeper to 3.8.3 when building with HBase 2.4+
> ---
>
> Key: PHOENIX-7171
> URL: https://issues.apache.org/jira/browse/PHOENIX-7171
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> HBase has recently upgraded to ZK 3.8 on all active branches.
> Phoenix currently downgrades ZK to 3.5.7 in its dependencies and the shaded 
> artifacts.
> Find a way to avoid that:
> * Can we avoid explicitly depending on ZK and take the transitive 
> dependency from HBase?
> * Does this work with Omid? (Exclude ZK from the Omid dependency?)
> * Do the Curator versions in Omid and Phoenix work with ZK 3.8?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7171 Update Zookeeper to 3.8.3 when building with HBase 2.4+ [phoenix]

2024-01-16 Thread via GitHub


stoty closed pull request #1797: PHOENIX-7171 Update Zookeeper to 3.8.3 when 
building with HBase 2.4+
URL: https://github.com/apache/phoenix/pull/1797


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7171 Update Zookeeper to 3.8.3 when building with HBase 2.4+ [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1797:
URL: https://github.com/apache/phoenix/pull/1797#issuecomment-1895267257

   merged manually


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7181) Do not declare commons-configuration2 dependency

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807601#comment-17807601
 ] 

ASF GitHub Bot commented on PHOENIX-7181:
-

stoty commented on PR #1796:
URL: https://github.com/apache/phoenix/pull/1796#issuecomment-1895212335

   merged manually




> Do not declare commons-configuration2 dependency
> 
>
> Key: PHOENIX-7181
> URL: https://issues.apache.org/jira/browse/PHOENIX-7181
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> I've tried to update the commons-configuration2 version in PHOENIX-7163, but 
> that would also require dependency-managing the commons-text version.
> As we only use commons-configuration2 because the Hadoop API leaks it, the 
> better solution is simply not to declare the explicit dependency and to rely on 
> the Hadoop transitive dependency (see the sketch below).
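
For contrast, a hedged sketch of the alternative the description rejects —
version numbers here are placeholders, not what PHOENIX-7163 actually
proposed: pinning commons-configuration2 would drag commons-text into
dependencyManagement as well, which is the extra burden avoided by dropping
the explicit dependency entirely:

    <!-- Hypothetical sketch of the rejected alternative: explicitly managing
         commons-configuration2 forces managing commons-text too. -->
    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.commons</groupId>
          <artifactId>commons-configuration2</artifactId>
          <version>2.9.0</version>   <!-- placeholder version -->
        </dependency>
        <dependency>
          <groupId>org.apache.commons</groupId>
          <artifactId>commons-text</artifactId>
          <version>1.11.0</version>  <!-- placeholder version -->
        </dependency>
      </dependencies>
    </dependencyManagement>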



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7181 Do not declare commons-configuration2 dependency [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1796:
URL: https://github.com/apache/phoenix/pull/1796#issuecomment-1895212335

   merged manually


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7181) Do not declare commons-configuration2 dependency

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807600#comment-17807600
 ] 

ASF GitHub Bot commented on PHOENIX-7181:
-

stoty closed pull request #1796: PHOENIX-7181 Do not declare 
commons-configuration2 dependency
URL: https://github.com/apache/phoenix/pull/1796




> Do not declare commons-configuration2 dependency
> 
>
> Key: PHOENIX-7181
> URL: https://issues.apache.org/jira/browse/PHOENIX-7181
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> I've tried to update the commons-configuration2 version in PHOENIX-7163, but 
> that would also require dependency-managing the commons-text version.
> As we only use commons-configuration2 because the Hadoop API leaks it, the 
> better solution is simply not to declare the explicit dependency and to rely on 
> the Hadoop transitive dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7181 Do not declare commons-configuration2 dependency [phoenix]

2024-01-16 Thread via GitHub


stoty closed pull request #1796: PHOENIX-7181 Do not declare 
commons-configuration2 dependency
URL: https://github.com/apache/phoenix/pull/1796


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7181) Do not declare commons-configuration2 dependency

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807599#comment-17807599
 ] 

ASF GitHub Bot commented on PHOENIX-7181:
-

stoty commented on PR #1796:
URL: https://github.com/apache/phoenix/pull/1796#issuecomment-1895209560

   Thank you.
   The asflicense check often gives false positives, as it sometimes finds and 
flags generated files.
   We could probably improve that test in Yetus. (Or maybe a Yetus upgrade 
would solve that problem.)




> Do not declare commons-configuration2 dependency
> 
>
> Key: PHOENIX-7181
> URL: https://issues.apache.org/jira/browse/PHOENIX-7181
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> I've tried to update the commons-configuration2 version in PHOENIX-7163, but 
> that would also require dependency-managing the commons-text version.
> As we only use commons-configuration2 because the Hadoop API leaks it, the 
> better solution is simply not to declare the explicit dependency and to rely on 
> the Hadoop transitive dependency.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7181 Do not declare commons-configuration2 dependency [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1796:
URL: https://github.com/apache/phoenix/pull/1796#issuecomment-1895209560

   Thank you.
   The asflicense check often gives false positives, as it sometimes finds and 
flags generated files.
   We could probably improve that test in Yetus. (Or maybe a Yetus upgrade 
would solve that problem.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807586#comment-17807586
 ] 

ASF GitHub Bot commented on PHOENIX-7130:
-

virajjasani commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895161776

   The pom changes are fine so far, but any other big changes or feature 
changes that have the potential to cause additional issues (more than what is 
already known with #1736, or to make it worse) are potential blockers, e.g. 
JSON support, which is technically ready for merge (though still in the final 
review phase) but which I would like to block.




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources' (see the sketch below).
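
A minimal sketch of what such a profile could look like, assuming the standard
maven-shade-plugin createSourcesJar switch is driven by a property; the
property name and wiring are assumptions, not the merged implementation:

    <!-- Hypothetical wiring: running 'mvn package -PskipShadeSources' flips a
         property that the shade plugin configuration reads. -->
    <properties>
      <shade.createSourcesJar>true</shade.createSourcesJar>
    </properties>

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <configuration>
        <createSourcesJar>${shade.createSourcesJar}</createSourcesJar>
      </configuration>
    </plugin>

    <profile>
      <id>skipShadeSources</id>
      <properties>
        <shade.createSourcesJar>false</shade.createSourcesJar>
      </properties>
    </profile>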



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-01-16 Thread via GitHub


virajjasani commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895161776

   The pom changes are fine so far, but any other big changes or feature 
changes that have the potential to cause additional issues (more than what is 
already known with #1736, or to make it worse) are potential blockers, e.g. 
JSON support, which is technically ready for merge (though still in the final 
review phase) but which I would like to block.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807584#comment-17807584
 ] 

ASF GitHub Bot commented on PHOENIX-7130:
-

stoty commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895152920

   Also, I'm going to merge the phoenix-server shading refactor soon (another 
pom-only change). It would probably be a good idea to add this change there as 
well.




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807583#comment-17807583
 ] 

ASF GitHub Bot commented on PHOENIX-7130:
-

virajjasani commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895151463

   I think it's fine to merge this since it adds the option to skip the shaded 
jar creation.




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895152920

   Also, I'm going to merge the phoenix-server shading refactor soon (another 
pom-only change). It would probably be a good idea to add this change there as 
well.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-01-16 Thread via GitHub


virajjasani commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895151463

   I think it's fine to merge this since it adds the option to skip the shaded 
jar creation.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807582#comment-17807582
 ] 

ASF GitHub Bot commented on PHOENIX-7130:
-

stoty commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895149696

   This is a pom-only change, and #1736 doesn't touch the poms, so I wouldn't 
expect them to conflict in any way.




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1895149696

   This is a pom-only change, and #1736 doesn't touch the poms, so I wouldn't 
expect them to conflict in any way.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7106) Data Integrity issues due to invalid rowkeys returned by various coprocessors

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807581#comment-17807581
 ] 

ASF GitHub Bot commented on PHOENIX-7106:
-

virajjasani commented on code in PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#discussion_r1454739552


##########
phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java:
##########
@@ -404,4 +421,37 @@ public boolean next(List<Cell> result) throws IOException {
             region.closeRegionOperation();
         }
     }
+
+    /**
+     * Add dummy cell to the result list based on either the previous rowkey returned to the
+     * client or the start rowkey and start rowkey include params.
+     *
+     * @param result result to add the dummy cell to.
+     * @param initStartRowKey scan start rowkey.
+     * @param includeInitStartRowKey scan start rowkey included.
+     * @param scan scan object.
+     */
+    private void updateDummyWithPrevRowKey(List<Cell> result, byte[] initStartRowKey,
+                                           boolean includeInitStartRowKey, Scan scan) {
+        result.clear();
+        if (previousResultRowKey != null) {
+            getDummyResult(previousResultRowKey, result);
+        } else {
+            if (includeInitStartRowKey && initStartRowKey.length > 0) {
+                byte[] prevKey;
+                if (Bytes.compareTo(initStartRowKey, initStartRowKey.length - 1, 1,
+                        Bytes.toBytesBinary("\\x00"), 0, 1) == 0) {
+                    prevKey = new byte[initStartRowKey.length - 1];
+                    System.arraycopy(initStartRowKey, 0, prevKey, 0, prevKey.length);
+                } else {
+                    prevKey = ByteUtil.previousKeyWithLength(ByteUtil.concat(initStartRowKey,
+                            new byte[10]), initStartRowKey.length + 10);
Review Comment:
   Addressing this with max-row-length logic turned out to be much more 
complicated than what I thought earlier. Very few tests are able to cover the 
scenario where the hbase client has to look up the given row in meta after the 
meta cache expires at the client, and when they do, the meta lookup fails 
because the meta lookup scan start rowkey becomes much larger by combining the 
table name, the dummy rowkey (max length - 1) and some constant values added by 
the client.
   
   I have addressed this in the latest commit: 
https://github.com/apache/phoenix/pull/1736/commits/fcbec0f676b08ce757036d33cf4687b797e2c781
   
   Awaiting the build result before moving forward.
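
   A self-contained sketch of the "previous rowkey" derivation from the hunk
above, with the Phoenix ByteUtil helpers replaced by inline stand-ins; this is
an approximation of the idea under discussion, not the committed code:

    import java.util.Arrays;

    public class PrevRowKeySketch {
        // If the start key ends in 0x00, the key just before it is that key with
        // the trailing byte dropped; otherwise decrement the last byte and pad
        // with 0xFF, i.e. the largest padded key strictly smaller than startKey.
        static byte[] previousKey(byte[] startKey) {
            if (startKey[startKey.length - 1] == 0x00) {
                return Arrays.copyOf(startKey, startKey.length - 1);
            }
            byte[] prev = Arrays.copyOf(startKey, startKey.length + 10);
            prev[startKey.length - 1]--;                      // step last byte down
            Arrays.fill(prev, startKey.length, prev.length, (byte) 0xFF);
            return prev;
        }

        public static void main(String[] args) {
            System.out.println(Arrays.toString(previousKey(new byte[] { 'a', 0x00 })));
            System.out.println(Arrays.toString(previousKey(new byte[] { 'a', 'b' })));
        }
    }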





> Data Integrity issues due to invalid rowkeys returned by various coprocessors
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or 
> block cache and return consistent data, i.e. the rowkeys of the returned cells 
> should stay within the scan boundaries. When a region moves and the scanner 
> needs a reset, or if the current row is too large and the server returns a 
> partial row, the subsequent scanner#next is supposed to return the remaining 
> cells. When this happens, the cell rowkeys returned by the server, i.e. by any 
> coprocessor, are expected to be within the scan boundary range so that the 
> server can reliably perform its validation and return the remaining cells as 
> expected.
> The Phoenix client initiates serial or parallel scans from the aggregators 
> based on the region boundaries, and the scan boundaries are sometimes adjusted 
> based on the optimizer-provided key ranges, e.g. to include tenant boundaries, 
> salt boundaries, etc. After the client opens the scanner and performs the scan 
> operation, some of the coprocessors return an invalid rowkey in the following 
> cases:
>  # Grouped aggregate queries
>  # Some ungrouped aggregate queries
>  # Offset queries
>  # Dummy cells returned with an empty rowkey
>  # Update statistics queries
>  # Uncovered index queries
>  # Ordered results at the server side
>  # ORDER BY DESC on the rowkey
>  # Global index read-repair
>  # Paging region scanner with HBase scanner reopen
>  # ORDER BY on non-pk column(s) with/without paging
>  # GROUP BY on non-pk column(s) with/without paging
> Since many of these cases return reserved rowkeys, they are likely not going 
> to match the scan or region boundaries, which has the potential to cause data 
> integrity issues in certain scenarios, as explained above. An empty rowkey 
> returned by the server can be treated as the end of the region scan by the 
> HBase client.
> With the paging feature enabled, if the page size is kept low, we have a higher 
> chance of

Re: [PR] PHOENIX-7106 Data Integrity issues due to invalid rowkeys returned by various coprocessors [phoenix]

2024-01-16 Thread via GitHub


virajjasani commented on code in PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#discussion_r1454739552


##########
phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java:
##########
@@ -404,4 +421,37 @@ public boolean next(List<Cell> result) throws IOException {
             region.closeRegionOperation();
         }
     }
+
+    /**
+     * Add dummy cell to the result list based on either the previous rowkey returned to the
+     * client or the start rowkey and start rowkey include params.
+     *
+     * @param result result to add the dummy cell to.
+     * @param initStartRowKey scan start rowkey.
+     * @param includeInitStartRowKey scan start rowkey included.
+     * @param scan scan object.
+     */
+    private void updateDummyWithPrevRowKey(List<Cell> result, byte[] initStartRowKey,
+                                           boolean includeInitStartRowKey, Scan scan) {
+        result.clear();
+        if (previousResultRowKey != null) {
+            getDummyResult(previousResultRowKey, result);
+        } else {
+            if (includeInitStartRowKey && initStartRowKey.length > 0) {
+                byte[] prevKey;
+                if (Bytes.compareTo(initStartRowKey, initStartRowKey.length - 1, 1,
+                        Bytes.toBytesBinary("\\x00"), 0, 1) == 0) {
+                    prevKey = new byte[initStartRowKey.length - 1];
+                    System.arraycopy(initStartRowKey, 0, prevKey, 0, prevKey.length);
+                } else {
+                    prevKey = ByteUtil.previousKeyWithLength(ByteUtil.concat(initStartRowKey,
+                            new byte[10]), initStartRowKey.length + 10);

Review Comment:
   Addressing this with max-row-length logic turned out to be much more 
complicated than what I thought earlier. Very few tests are able to cover the 
scenario where the hbase client has to look up the given row in meta after the 
meta cache expires at the client, and when they do, the meta lookup fails 
because the meta lookup scan start rowkey becomes much larger by combining the 
table name, the dummy rowkey (max length - 1) and some constant values added by 
the client.
   
   I have addressed this in the latest commit: 
https://github.com/apache/phoenix/pull/1736/commits/fcbec0f676b08ce757036d33cf4687b797e2c781
   
   Awaiting the build result before moving forward.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7106) Data Integrity issues due to invalid rowkeys returned by various coprocessors

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807580#comment-17807580
 ] 

ASF GitHub Bot commented on PHOENIX-7106:
-

virajjasani commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1895124476

   @gjacoby126 I agree with your concerns, and I really wish there was some 
way to make the tests generic, but honestly it's almost impossible to do with 
a parameterized argument, for several reasons:
   
   1. We need to set `hbase.client.scanner.max.result.size` before setting up 
`PhoenixTestDriver`
   2. Set `phoenix.scanning.result.post.dummy.process` to a class that can 
process region moves after a dummy is received by the client (depending on how 
aggressive the tests are, we need custom logic here)
   3. Set `phoenix.scanning.result.post.valid.process` for some tests
   4. Set `phoenix.server.page.size.ms` to 0 ms to generate frequent timeouts 
at the server side
   5. Depending on the tests, we also need to move regions after a specific 
number of `rs#next` calls
   6. The region move logic has to be very specific: only move regions of the 
tables that are really going to be used by the queries, e.g. the data table and 
index table for the given query. If we move all regions (including HBase and 
Phoenix system tables), tests take 5x more time than they do with the current 
setup.
   
   In fact, I was initially able to reduce the patch size from ~19k to < 15k by 
adding `ParallelStatsDisabledWithRegionMovesIT`, which is now extended by 
several tests. (A rough sketch of this kind of setup follows below.)
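
   A rough, hypothetical sketch of the base-class setup described in points 1-4
above — the config keys are the ones quoted in the comment, while the class
shape and the hook class names are invented, not the actual
ParallelStatsDisabledWithRegionMovesIT:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.phoenix.query.BaseTest;
    import org.apache.phoenix.util.ReadOnlyProps;
    import org.junit.BeforeClass;

    public abstract class RegionMovesTestBase extends BaseTest {

        @BeforeClass
        public static synchronized void doSetup() throws Exception {
            Map<String, String> props = new HashMap<>();
            // (1) must be in place before the PhoenixTestDriver comes up
            props.put("hbase.client.scanner.max.result.size", "60");
            // (4) 0 ms page size forces frequent server-side page timeouts
            props.put("phoenix.server.page.size.ms", "0");
            // (2)/(3) hypothetical hook classes reacting to dummy/valid results
            props.put("phoenix.scanning.result.post.dummy.process",
                    "org.example.MoveRegionsOnDummy");
            props.put("phoenix.scanning.result.post.valid.process",
                    "org.example.MoveRegionsOnValid");
            setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
        }
    }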




> Data Integrity issues due to invalid rowkeys returned by various coprocessors
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or 
> block cache and return consistent data, i.e. the rowkeys of the returned cells 
> should stay within the scan boundaries. When a region moves and the scanner 
> needs a reset, or if the current row is too large and the server returns a 
> partial row, the subsequent scanner#next is supposed to return the remaining 
> cells. When this happens, the cell rowkeys returned by the server, i.e. by any 
> coprocessor, are expected to be within the scan boundary range so that the 
> server can reliably perform its validation and return the remaining cells as 
> expected.
> The Phoenix client initiates serial or parallel scans from the aggregators 
> based on the region boundaries, and the scan boundaries are sometimes adjusted 
> based on the optimizer-provided key ranges, e.g. to include tenant boundaries, 
> salt boundaries, etc. After the client opens the scanner and performs the scan 
> operation, some of the coprocessors return an invalid rowkey in the following 
> cases:
>  # Grouped aggregate queries
>  # Some ungrouped aggregate queries
>  # Offset queries
>  # Dummy cells returned with an empty rowkey
>  # Update statistics queries
>  # Uncovered index queries
>  # Ordered results at the server side
>  # ORDER BY DESC on the rowkey
>  # Global index read-repair
>  # Paging region scanner with HBase scanner reopen
>  # ORDER BY on non-pk column(s) with/without paging
>  # GROUP BY on non-pk column(s) with/without paging
> Since many of these cases return reserved rowkeys, they are likely not going 
> to match the scan or region boundaries, which has the potential to cause data 
> integrity issues in certain scenarios, as explained above. An empty rowkey 
> returned by the server can be treated as the end of the region scan by the 
> HBase client.
> With the paging feature enabled, if the page size is kept low, we have a higher 
> chance of scanners returning dummy cells, resulting in an increased number of 
> RPC calls in exchange for better latency and fewer timeouts. We should return 
> only valid rowkeys within the scan range for all the cases where we perform the 
> above-mentioned operations, like complex aggregate or offset queries.
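
A minimal sketch of the invariant the description calls for — any "dummy" cell
handed back mid-scan should carry a rowkey inside the scan range (here, the
last rowkey already returned), never an empty or out-of-range one. The
CellBuilder calls are standard HBase 2.x API; the surrounding method and the
empty family/qualifier convention are assumptions:

    import java.util.List;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellBuilderFactory;
    import org.apache.hadoop.hbase.CellBuilderType;

    final class DummyCellSketch {
        // Build a placeholder cell whose rowkey is the last rowkey already
        // returned to the client, so it stays within the scan boundaries.
        static void addDummy(byte[] lastReturnedRowKey, List<Cell> result) {
            Cell dummy = CellBuilderFactory.create(CellBuilderType.DEEP_COPY)
                    .setRow(lastReturnedRowKey)
                    .setFamily(new byte[0])    // empty family/qualifier marks the
                    .setQualifier(new byte[0]) // cell as a "dummy" for the client
                    .setTimestamp(Long.MAX_VALUE)
                    .setType(Cell.Type.Put)
                    .setValue(new byte[0])
                    .build();
            result.add(dummy);
        }
    }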



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7106 Data Integrity issues due to invalid rowkeys returned by various coprocessors [phoenix]

2024-01-16 Thread via GitHub


virajjasani commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1895124476

   @gjacoby126 I agree with your concerns, and I really wish there was some 
way to make the tests generic, but honestly it's almost impossible to do with 
a parameterized argument, for several reasons:
   
   1. We need to set `hbase.client.scanner.max.result.size` before setting up 
`PhoenixTestDriver`
   2. Set `phoenix.scanning.result.post.dummy.process` to a class that can 
process region moves after a dummy is received by the client (depending on how 
aggressive the tests are, we need custom logic here)
   3. Set `phoenix.scanning.result.post.valid.process` for some tests
   4. Set `phoenix.server.page.size.ms` to 0 ms to generate frequent timeouts 
at the server side
   5. Depending on the tests, we also need to move regions after a specific 
number of `rs#next` calls
   6. The region move logic has to be very specific: only move regions of the 
tables that are really going to be used by the queries, e.g. the data table and 
index table for the given query. If we move all regions (including HBase and 
Phoenix system tables), tests take 5x more time than they do with the current 
setup.
   
   In fact, I was initially able to reduce the patch size from ~19k to < 15k by 
adding `ParallelStatsDisabledWithRegionMovesIT`, which is now extended by 
several tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7006: Configure maxLookbackAge at table level [phoenix]

2024-01-16 Thread via GitHub


sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1894875868

   @gjacoby126 @haridsv please re-review the PR. Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7006) Configure maxLookbackAge at table level

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807529#comment-17807529
 ] 

ASF GitHub Bot commented on PHOENIX-7006:
-

sanjeet006py commented on PR #1751:
URL: https://github.com/apache/phoenix/pull/1751#issuecomment-1894875868

   @gjacoby126 @haridsv please re-review the PR. Thanks




> Configure maxLookbackAge at table level
> ---
>
> Key: PHOENIX-7006
> URL: https://issues.apache.org/jira/browse/PHOENIX-7006
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Viraj Jasani
>Assignee: Sanjeet Malhotra
>Priority: Major
>
> The Phoenix max lookback age feature preserves live or deleted row versions 
> that are only visible through the max lookback window; it does not preserve 
> any unwanted row versions that should not be visible through the max lookback 
> window. More details on the max lookback redesign: PHOENIX-6888
> As of today, max lookback age is only configurable at the cluster level 
> (config key: phoenix.max.lookback.age.seconds), meaning the same value is used 
> by all tables. This does not allow an individual table's compaction scanner to 
> retain data based on a table-level max lookback age.
> Setting max lookback age at the table level can serve multiple purposes, e.g. 
> change-data-capture (PHOENIX-7001) for an individual table should have its own 
> latest-data retention period.
> The purpose of this Jira is to allow max lookback age as a table-level 
> property:
>  * New column in SYSTEM.CATALOG to preserve the table-level max lookback age
>  * PTable object to read the value of max lookback from SYSTEM.CATALOG
>  * Allow CREATE/ALTER TABLE DDLs to provide the maxlookback attribute (see the 
> sketch below)
>  * CompactionScanner should use the table-level maxLookbackAge, if available, 
> else fall back to the cluster-level config
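
A hedged JDBC sketch of the DDL attribute from the third bullet;
MAX_LOOKBACK_AGE is a placeholder property name and the value unit is assumed
to be seconds, neither of which is confirmed by this thread:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MaxLookbackDdlSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost")) {
                // Hypothetical table-level override of the cluster-wide
                // phoenix.max.lookback.age.seconds config, for this table only.
                conn.createStatement().execute(
                        "CREATE TABLE T (ID BIGINT PRIMARY KEY, V VARCHAR) "
                                + "MAX_LOOKBACK_AGE=3600");
                conn.createStatement().execute(
                        "ALTER TABLE T SET MAX_LOOKBACK_AGE=7200");
            }
        }
    }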



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-6996) Provide an upgrade path for Phoenix tables with HBase TTL to move their TTL spec to SYSTEM.CATALOG

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807483#comment-17807483
 ] 

ASF GitHub Bot commented on PHOENIX-6996:
-

gjacoby126 commented on PR #1752:
URL: https://github.com/apache/phoenix/pull/1752#issuecomment-1894645565

   @lokiore - what happens if I don't want to upgrade HBase TTLs to Phoenix 
TTLs at upgrade time, but I want to do it later?




> Provide an upgrade path for Phoenix tables with HBase TTL to move their TTL 
> spec to SYSTEM.CATALOG
> --
>
> Key: PHOENIX-6996
> URL: https://issues.apache.org/jira/browse/PHOENIX-6996
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Lokesh Khurana
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-6996 :- Provide an upgrade path for Phoenix tables with HBase TTL to move the… [phoenix]

2024-01-16 Thread via GitHub


gjacoby126 commented on PR #1752:
URL: https://github.com/apache/phoenix/pull/1752#issuecomment-1894645565

   @lokiore - what happens if I don't want to upgrade HBase TTLs to Phoenix 
TTLs at upgrade time, but I want to do it later?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807482#comment-17807482
 ] 

ASF GitHub Bot commented on PHOENIX-7130:
-

gjacoby126 commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1894644362

   (Note that we're waiting on #1736 before merging since it's a blocker.)




> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7130 Support skipping of shade sources jar creation [phoenix]

2024-01-16 Thread via GitHub


gjacoby126 commented on PR #1745:
URL: https://github.com/apache/phoenix/pull/1745#issuecomment-1894644362

   (Note that we're waiting on #1736 before merging since it's a blocker.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7106) Data Integrity issues due to invalid rowkeys returned by various coprocessors

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807480#comment-17807480
 ] 

ASF GitHub Bot commented on PHOENIX-7106:
-

gjacoby126 commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894641522

   A general question: looks like we're duplicating quite a lot of IT tests so 
that we can test them with region moves. This will require future test-writers 
to either duplicate their tests going forward, or the tests will gradually 
diverge. This will also increase the time to run the full test suite.
   
   Is there a way we can do region moves as a parameterized use case, and/or is 
there a more limited set of functionality we need to verify works with region 
moves that covers all the different scanner types but not necessarily all the 
higher level logic the existing IT tests need to verify?
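
   A minimal JUnit sketch of the parameterization being suggested, so one test
body runs both with and without region moves; the class and hook are invented
for illustration:

    import java.util.Arrays;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;

    @RunWith(Parameterized.class)
    public class ScanWithRegionMovesIT {

        @Parameterized.Parameters(name = "moveRegions={0}")
        public static Iterable<Object[]> data() {
            return Arrays.asList(new Object[][] { { false }, { true } });
        }

        private final boolean moveRegions;

        public ScanWithRegionMovesIT(boolean moveRegions) {
            this.moveRegions = moveRegions;
        }

        @Test
        public void testSelectWithOrderBy() throws Exception {
            // run the query under test; a hypothetical hook would move regions
            // between rs.next() calls whenever moveRegions is true
        }
    }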




> Data Integrity issues due to invalid rowkeys returned by various coprocessors
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or 
> block cache and return consistent data, i.e. the rowkeys of the returned cells 
> should stay within the scan boundaries. When a region moves and the scanner 
> needs a reset, or if the current row is too large and the server returns a 
> partial row, the subsequent scanner#next is supposed to return the remaining 
> cells. When this happens, the cell rowkeys returned by the server, i.e. by any 
> coprocessor, are expected to be within the scan boundary range so that the 
> server can reliably perform its validation and return the remaining cells as 
> expected.
> The Phoenix client initiates serial or parallel scans from the aggregators 
> based on the region boundaries, and the scan boundaries are sometimes adjusted 
> based on the optimizer-provided key ranges, e.g. to include tenant boundaries, 
> salt boundaries, etc. After the client opens the scanner and performs the scan 
> operation, some of the coprocessors return an invalid rowkey in the following 
> cases:
>  # Grouped aggregate queries
>  # Some ungrouped aggregate queries
>  # Offset queries
>  # Dummy cells returned with an empty rowkey
>  # Update statistics queries
>  # Uncovered index queries
>  # Ordered results at the server side
>  # ORDER BY DESC on the rowkey
>  # Global index read-repair
>  # Paging region scanner with HBase scanner reopen
>  # ORDER BY on non-pk column(s) with/without paging
>  # GROUP BY on non-pk column(s) with/without paging
> Since many of these cases return reserved rowkeys, they are likely not going 
> to match the scan or region boundaries, which has the potential to cause data 
> integrity issues in certain scenarios, as explained above. An empty rowkey 
> returned by the server can be treated as the end of the region scan by the 
> HBase client.
> With the paging feature enabled, if the page size is kept low, we have a higher 
> chance of scanners returning dummy cells, resulting in an increased number of 
> RPC calls in exchange for better latency and fewer timeouts. We should return 
> only valid rowkeys within the scan range for all the cases where we perform the 
> above-mentioned operations, like complex aggregate or offset queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7106) Data Integrity issues due to invalid rowkeys returned by various coprocessors

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807481#comment-17807481
 ] 

ASF GitHub Bot commented on PHOENIX-7106:
-

gjacoby126 commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894641811

   That would also shrink the size of the patch significantly.




> Data Integrity issues due to invalid rowkeys returned by various coprocessors
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or 
> block cache and return consistent data, i.e. the rowkeys of the returned cells 
> should stay within the scan boundaries. When a region moves and the scanner 
> needs a reset, or if the current row is too large and the server returns a 
> partial row, the subsequent scanner#next is supposed to return the remaining 
> cells. When this happens, the cell rowkeys returned by the server, i.e. by any 
> coprocessor, are expected to be within the scan boundary range so that the 
> server can reliably perform its validation and return the remaining cells as 
> expected.
> The Phoenix client initiates serial or parallel scans from the aggregators 
> based on the region boundaries, and the scan boundaries are sometimes adjusted 
> based on the optimizer-provided key ranges, e.g. to include tenant boundaries, 
> salt boundaries, etc. After the client opens the scanner and performs the scan 
> operation, some of the coprocessors return an invalid rowkey in the following 
> cases:
>  # Grouped aggregate queries
>  # Some ungrouped aggregate queries
>  # Offset queries
>  # Dummy cells returned with an empty rowkey
>  # Update statistics queries
>  # Uncovered index queries
>  # Ordered results at the server side
>  # ORDER BY DESC on the rowkey
>  # Global index read-repair
>  # Paging region scanner with HBase scanner reopen
>  # ORDER BY on non-pk column(s) with/without paging
>  # GROUP BY on non-pk column(s) with/without paging
> Since many of these cases return reserved rowkeys, they are likely not going 
> to match the scan or region boundaries, which has the potential to cause data 
> integrity issues in certain scenarios, as explained above. An empty rowkey 
> returned by the server can be treated as the end of the region scan by the 
> HBase client.
> With the paging feature enabled, if the page size is kept low, we have a higher 
> chance of scanners returning dummy cells, resulting in an increased number of 
> RPC calls in exchange for better latency and fewer timeouts. We should return 
> only valid rowkeys within the scan range for all the cases where we perform the 
> above-mentioned operations, like complex aggregate or offset queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7106 Data Integrity issues due to invalid rowkeys returned by various coprocessors [phoenix]

2024-01-16 Thread via GitHub


gjacoby126 commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894641811

   That would also shrink the size of the patch significantly.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7106 Data Integrity issues due to invalid rowkeys returned by various coprocessors [phoenix]

2024-01-16 Thread via GitHub


gjacoby126 commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894641522

   A general question: looks like we're duplicating quite a lot of IT tests so 
that we can test them with region moves. This will require future test-writers 
to either duplicate their tests going forward, or the tests will gradually 
diverge. This will also increase the time to run the full test suite.
   
   Is there a way we can do region moves as a parameterized use case, and/or is 
there a more limited set of functionality we need to verify works with region 
moves that covers all the different scanner types but not necessarily all the 
higher level logic the existing IT tests need to verify?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7106) Data Integrity issues due to invalid rowkeys returned by various coprocessors

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807445#comment-17807445
 ] 

ASF GitHub Bot commented on PHOENIX-7106:
-

virajjasani commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894518974

   
https://github.com/apache/phoenix/pull/1736/commits/41144ee9f9694c6c558256c2e6094cdd5ea60962
 caused some issues, working on it.




> Data Integrity issues due to invalid rowkeys returned by various coprocessors
> -
>
> Key: PHOENIX-7106
> URL: https://issues.apache.org/jira/browse/PHOENIX-7106
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> The HBase scanner interface expects the server to scan cells from the HFile or 
> block cache and return consistent data, i.e. the rowkeys of the returned cells 
> should stay within the scan boundaries. When a region moves and the scanner 
> needs a reset, or if the current row is too large and the server returns a 
> partial row, the subsequent scanner#next is supposed to return the remaining 
> cells. When this happens, the cell rowkeys returned by the server, i.e. by any 
> coprocessor, are expected to be within the scan boundary range so that the 
> server can reliably perform its validation and return the remaining cells as 
> expected.
> The Phoenix client initiates serial or parallel scans from the aggregators 
> based on the region boundaries, and the scan boundaries are sometimes adjusted 
> based on the optimizer-provided key ranges, e.g. to include tenant boundaries, 
> salt boundaries, etc. After the client opens the scanner and performs the scan 
> operation, some of the coprocessors return an invalid rowkey in the following 
> cases:
>  # Grouped aggregate queries
>  # Some ungrouped aggregate queries
>  # Offset queries
>  # Dummy cells returned with an empty rowkey
>  # Update statistics queries
>  # Uncovered index queries
>  # Ordered results at the server side
>  # ORDER BY DESC on the rowkey
>  # Global index read-repair
>  # Paging region scanner with HBase scanner reopen
>  # ORDER BY on non-pk column(s) with/without paging
>  # GROUP BY on non-pk column(s) with/without paging
> Since many of these cases return reserved rowkeys, they are likely not going 
> to match the scan or region boundaries, which has the potential to cause data 
> integrity issues in certain scenarios, as explained above. An empty rowkey 
> returned by the server can be treated as the end of the region scan by the 
> HBase client.
> With the paging feature enabled, if the page size is kept low, we have a higher 
> chance of scanners returning dummy cells, resulting in an increased number of 
> RPC calls in exchange for better latency and fewer timeouts. We should return 
> only valid rowkeys within the scan range for all the cases where we perform the 
> above-mentioned operations, like complex aggregate or offset queries.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7106 Data Integrity issues due to invalid rowkeys returned by various coprocessors [phoenix]

2024-01-16 Thread via GitHub


virajjasani commented on PR #1736:
URL: https://github.com/apache/phoenix/pull/1736#issuecomment-1894518974

   
https://github.com/apache/phoenix/pull/1736/commits/41144ee9f9694c6c558256c2e6094cdd5ea60962
 caused some issues, working on it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7015) Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807379#comment-17807379
 ] 

ASF GitHub Bot commented on PHOENIX-7015:
-

haridsv commented on code in PR #1794:
URL: https://github.com/apache/phoenix/pull/1794#discussion_r1453723232


##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/CDCGlobalIndexRegionScanner.java:
##########
@@ -104,115 +94,74 @@ protected Scan prepareDataTableScan(Collection<byte[]> dataRowKeys) throws IOExc
     protected boolean getNextCoveredIndexRow(List<Cell> result) throws IOException {
         if (indexRowIterator.hasNext()) {
             List<Cell> indexRow = indexRowIterator.next();
-            for (Cell c: indexRow) {
-                if (c.getType() == Cell.Type.Put) {
-                    result.add(c);
-                }
-            }
+            Cell firstCell = indexRow.get(indexRow.size() - 1);
+            byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
+                    firstCell.getRowOffset(), firstCell.getRowLength())
+                    .copyBytesIfNecessary();
+            ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
+                    indexToDataRowKeyMap.get(indexRowKey));
+            Result dataRow = dataRows.get(dataRowKey);
+            Long indexCellTs = firstCell.getTimestamp();
+            Cell.Type indexCellType = firstCell.getType();
+
+            Map<String, Object> preImageObj = new HashMap<>();
+            Map<String, Object> changeImageObj = new HashMap<>();
+            List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
+            Collections.sort(resultCells, CellComparator.getInstance().reversed());
+
+            boolean isIndexCellDeleteRow = false;
+            boolean isIndexCellDeleteColumn = false;
             try {
-                Result dataRow = null;
-                if (! result.isEmpty()) {
-                    Cell firstCell = result.get(0);
-                    byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
-                            firstCell.getRowOffset(), firstCell.getRowLength())
-                            .copyBytesIfNecessary();
-                    ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
-                            indexToDataRowKeyMap.get(indexRowKey));
-                    dataRow = dataRows.get(dataRowKey);
-                    Long indexRowTs = result.get(0).getTimestamp();
-                    Map<Long, List<Cell>> changeTimeline = dataRowChanges.get(
-                            dataRowKey);
-                    if (changeTimeline == null) {
-                        List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
-                        Collections.sort(resultCells, CellComparator.getInstance().reversed());
-                        List<Cell> deleteMarkers = new ArrayList<>();
-                        List<List<Cell>> columns = new LinkedList<>();
-                        Cell currentColumnCell = null;
-                        Pair<byte[], byte[]> emptyKV = EncodedColumnsUtil.getEmptyKeyValueInfo(
-                                EncodedColumnsUtil.getQualifierEncodingScheme(scan));
-                        List<Cell> currentColumn = null;
-                        Set<Long> uniqueTimeStamps = new HashSet<>();
-                        // TODO: From CompactionScanner.formColumns(), see if this can be refactored.
-                        for (Cell cell : resultCells) {
-                            uniqueTimeStamps.add(cell.getTimestamp());
-                            if (cell.getType() != Cell.Type.Put) {
-                                deleteMarkers.add(cell);
-                            }
-                            if (CellUtil.matchingColumn(cell, QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES,
-                                    emptyKV.getFirst())) {
-                                continue;
-                            }
-                            if (currentColumnCell == null) {
-                                currentColumn = new LinkedList<>();
-                                currentColumnCell = cell;
-                                currentColumn.add(cell);
-                            } else if (!CellUtil.matchingColumn(cell, currentColumnCell)) {
-                                columns.add(currentColumn);
-                                currentColumn = new LinkedList<>();
-                                currentColumnCell = cell;
-                                currentColumn.add(cell);
-                            } else {
-                                currentColumn.add(cell);
-                            }
+                for (Cell cell : resultCells) {
+                    if (cell.getType() == Cell.Type.DeleteColumn) {
+                        // DDL is not supported in CDC

Review Comment:
   Also, there can be multiple columns set to NULL in the same UPSERT 
statement, so we would have to detect and surface these changes as-is.
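
   A toy sketch of that point — DeleteColumn markers written by one UPSERT
share a timestamp, so a change image has to group markers per timestamp and
surface every nulled column of that change together; the helper is invented,
and only the Cell/CellUtil/Bytes calls are standard HBase API:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    import org.apache.hadoop.hbase.Cell;
    import org.apache.hadoop.hbase.CellUtil;
    import org.apache.hadoop.hbase.util.Bytes;

    final class NulledColumnsSketch {
        // Group DeleteColumn markers by timestamp: each group is one logical
        // change, and every column in it was set to NULL by the same UPSERT.
        static Map<Long, List<String>> nulledColumnsByChange(List<Cell> cells) {
            Map<Long, List<String>> byTs = new TreeMap<>();
            for (Cell c : cells) {
                if (c.getType() == Cell.Type.DeleteColumn) {
                    byTs.computeIfAbsent(c.getTimestamp(), ts -> new ArrayList<>())
                        .add(Bytes.toString(CellUtil.cloneQualifier(c)));
                }
            }
            return byTs;
        }
    }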





> Extend 

Re: [PR] PHOENIX-7015 Implementing CDCGlobalIndexRegionScanner: JSON Response for CDC [phoenix]

2024-01-16 Thread via GitHub


haridsv commented on code in PR #1794:
URL: https://github.com/apache/phoenix/pull/1794#discussion_r1453723232


##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/CDCGlobalIndexRegionScanner.java:
##########
@@ -104,115 +94,74 @@ protected Scan prepareDataTableScan(Collection<byte[]> dataRowKeys) throws IOExc
     protected boolean getNextCoveredIndexRow(List<Cell> result) throws IOException {
         if (indexRowIterator.hasNext()) {
             List<Cell> indexRow = indexRowIterator.next();
-            for (Cell c: indexRow) {
-                if (c.getType() == Cell.Type.Put) {
-                    result.add(c);
-                }
-            }
+            Cell firstCell = indexRow.get(indexRow.size() - 1);
+            byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
+                    firstCell.getRowOffset(), firstCell.getRowLength())
+                    .copyBytesIfNecessary();
+            ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
+                    indexToDataRowKeyMap.get(indexRowKey));
+            Result dataRow = dataRows.get(dataRowKey);
+            Long indexCellTs = firstCell.getTimestamp();
+            Cell.Type indexCellType = firstCell.getType();
+
+            Map<String, Object> preImageObj = new HashMap<>();
+            Map<String, Object> changeImageObj = new HashMap<>();
+            List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
+            Collections.sort(resultCells, CellComparator.getInstance().reversed());
+
+            boolean isIndexCellDeleteRow = false;
+            boolean isIndexCellDeleteColumn = false;
             try {
-                Result dataRow = null;
-                if (! result.isEmpty()) {
-                    Cell firstCell = result.get(0);
-                    byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
-                            firstCell.getRowOffset(), firstCell.getRowLength())
-                            .copyBytesIfNecessary();
-                    ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
-                            indexToDataRowKeyMap.get(indexRowKey));
-                    dataRow = dataRows.get(dataRowKey);
-                    Long indexRowTs = result.get(0).getTimestamp();
-                    Map<Long, List<Cell>> changeTimeline = dataRowChanges.get(
-                            dataRowKey);
-                    if (changeTimeline == null) {
-                        List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
-                        Collections.sort(resultCells, CellComparator.getInstance().reversed());
-                        List<Cell> deleteMarkers = new ArrayList<>();
-                        List<List<Cell>> columns = new LinkedList<>();
-                        Cell currentColumnCell = null;
-                        Pair<byte[], byte[]> emptyKV = EncodedColumnsUtil.getEmptyKeyValueInfo(
-                                EncodedColumnsUtil.getQualifierEncodingScheme(scan));
-                        List<Cell> currentColumn = null;
-                        Set<Long> uniqueTimeStamps = new HashSet<>();
-                        // TODO: From CompactionScanner.formColumns(), see if this can be refactored.
-                        for (Cell cell : resultCells) {
-                            uniqueTimeStamps.add(cell.getTimestamp());
-                            if (cell.getType() != Cell.Type.Put) {
-                                deleteMarkers.add(cell);
-                            }
-                            if (CellUtil.matchingColumn(cell, QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES,
-                                    emptyKV.getFirst())) {
-                                continue;
-                            }
-                            if (currentColumnCell == null) {
-                                currentColumn = new LinkedList<>();
-                                currentColumnCell = cell;
-                                currentColumn.add(cell);
-                            } else if (!CellUtil.matchingColumn(cell, currentColumnCell)) {
-                                columns.add(currentColumn);
-                                currentColumn = new LinkedList<>();
-                                currentColumnCell = cell;
-                                currentColumn.add(cell);
-                            } else {
-                                currentColumn.add(cell);
-                            }
+                for (Cell cell : resultCells) {
+                    if (cell.getType() == Cell.Type.DeleteColumn) {
+                        // DDL is not supported in CDC

Review Comment:
   Also, there can be multiple columns set to NULL in the same UPSERT 
statement, so we would have to detect and surface these changes as-is.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, 

[jira] [Commented] (PHOENIX-7108) Provide support for pruning expired rows of views using Phoenix level compactions

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807373#comment-17807373
 ] 

ASF GitHub Bot commented on PHOENIX-7108:
-

jpisaac opened a new pull request, #1799:
URL: https://github.com/apache/phoenix/pull/1799

   (no comment)




> Provide support for pruning expired rows of views using Phoenix level 
> compactions
> -
>
> Key: PHOENIX-7108
> URL: https://issues.apache.org/jira/browse/PHOENIX-7108
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> Modify Phoenix compaction framework introduced in PHOENIX-6888 to prune TTL 
> expired rows of views.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] PHOENIX-7108 Provide support for pruning expired rows of views using Phoenix level compactions [phoenix]

2024-01-16 Thread via GitHub


jpisaac opened a new pull request, #1799:
URL: https://github.com/apache/phoenix/pull/1799

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7015) Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807366#comment-17807366
 ] 

ASF GitHub Bot commented on PHOENIX-7015:
-

haridsv commented on code in PR #1794:
URL: https://github.com/apache/phoenix/pull/1794#discussion_r1452101150


##
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/CDCGlobalIndexRegionScanner.java:
##
@@ -104,115 +94,74 @@ protected Scan prepareDataTableScan(Collection<byte[]> 
dataRowKeys) throws IOExc
 protected boolean getNextCoveredIndexRow(List<Cell> result) throws 
IOException {
 if (indexRowIterator.hasNext()) {
 List<Cell> indexRow = indexRowIterator.next();
-for (Cell c: indexRow) {
-if (c.getType() == Cell.Type.Put) {
-result.add(c);
-}
-}
+Cell firstCell = indexRow.get(indexRow.size() - 1);
+byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
+firstCell.getRowOffset(), firstCell.getRowLength())
+.copyBytesIfNecessary();
+ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
+indexToDataRowKeyMap.get(indexRowKey));
+Result dataRow = dataRows.get(dataRowKey);
+Long indexCellTs = firstCell.getTimestamp();
+Cell.Type indexCellType = firstCell.getType();
+
+Map<String, Object> preImageObj = new HashMap<>();
+Map<String, Object> changeImageObj = new HashMap<>();
+List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
+Collections.sort(resultCells, 
CellComparator.getInstance().reversed());
+
+boolean isIndexCellDeleteRow = false;
+boolean isIndexCellDeleteColumn = false;
 try {
-Result dataRow = null;
-if (! result.isEmpty()) {
-Cell firstCell = result.get(0);
-byte[] indexRowKey = new 
ImmutableBytesPtr(firstCell.getRowArray(),
-firstCell.getRowOffset(), firstCell.getRowLength())
-.copyBytesIfNecessary();
-ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
-indexToDataRowKeyMap.get(indexRowKey));
-dataRow = dataRows.get(dataRowKey);
-Long indexRowTs = result.get(0).getTimestamp();
-Map<Long, List<Cell>> changeTimeline = 
dataRowChanges.get(
-dataRowKey);
-if (changeTimeline == null) {
-List<Cell> resultCells = 
Arrays.asList(dataRow.rawCells());
-Collections.sort(resultCells, 
CellComparator.getInstance().reversed());
-List<Cell> deleteMarkers = new ArrayList<>();
-List<List<Cell>> columns = new LinkedList<>();
-Cell currentColumnCell = null;
-Pair<byte[], byte[]> emptyKV = 
EncodedColumnsUtil.getEmptyKeyValueInfo(
-
EncodedColumnsUtil.getQualifierEncodingScheme(scan));
-List<Cell> currentColumn = null;
-Set<Long> uniqueTimeStamps = new HashSet<>();
-// TODO: From CompactionScanner.formColumns(), see if 
this can be refactored.
-for (Cell cell : resultCells) {
-uniqueTimeStamps.add(cell.getTimestamp());
-if (cell.getType() != Cell.Type.Put) {
-deleteMarkers.add(cell);
-}
-if (CellUtil.matchingColumn(cell, 
QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES,
-emptyKV.getFirst())) {
-continue;
-}
-if (currentColumnCell == null) {
-currentColumn = new LinkedList<>();
-currentColumnCell = cell;
-currentColumn.add(cell);
-} else if (!CellUtil.matchingColumn(cell, 
currentColumnCell)) {
-columns.add(currentColumn);
-currentColumn = new LinkedList<>();
-currentColumnCell = cell;
-currentColumn.add(cell);
-} else {
-currentColumn.add(cell);
-}
+for (Cell cell : resultCells) {
+if (cell.getType() == Cell.Type.DeleteColumn) {
+// DDL is not supported in CDC
+if (cell.getTimestamp() == indexCellTs) {
+isIndexCellDeleteColumn = true;
+break;

Review Comment:
   You can have multiple `DeleteColumn` cells corresponding to different 
columns in the same UPSERT.

Re: [PR] PHOENIX-7015 Implementing CDCGlobalIndexRegionScanner: JSON Response for CDC [phoenix]

2024-01-16 Thread via GitHub


haridsv commented on code in PR #1794:
URL: https://github.com/apache/phoenix/pull/1794#discussion_r1452101150


##
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/CDCGlobalIndexRegionScanner.java:
##
@@ -104,115 +94,74 @@ protected Scan prepareDataTableScan(Collection<byte[]> 
dataRowKeys) throws IOExc
 protected boolean getNextCoveredIndexRow(List<Cell> result) throws 
IOException {
 if (indexRowIterator.hasNext()) {
 List<Cell> indexRow = indexRowIterator.next();
-for (Cell c: indexRow) {
-if (c.getType() == Cell.Type.Put) {
-result.add(c);
-}
-}
+Cell firstCell = indexRow.get(indexRow.size() - 1);
+byte[] indexRowKey = new ImmutableBytesPtr(firstCell.getRowArray(),
+firstCell.getRowOffset(), firstCell.getRowLength())
+.copyBytesIfNecessary();
+ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
+indexToDataRowKeyMap.get(indexRowKey));
+Result dataRow = dataRows.get(dataRowKey);
+Long indexCellTs = firstCell.getTimestamp();
+Cell.Type indexCellType = firstCell.getType();
+
+Map<String, Object> preImageObj = new HashMap<>();
+Map<String, Object> changeImageObj = new HashMap<>();
+List<Cell> resultCells = Arrays.asList(dataRow.rawCells());
+Collections.sort(resultCells, 
CellComparator.getInstance().reversed());
+
+boolean isIndexCellDeleteRow = false;
+boolean isIndexCellDeleteColumn = false;
 try {
-Result dataRow = null;
-if (! result.isEmpty()) {
-Cell firstCell = result.get(0);
-byte[] indexRowKey = new 
ImmutableBytesPtr(firstCell.getRowArray(),
-firstCell.getRowOffset(), firstCell.getRowLength())
-.copyBytesIfNecessary();
-ImmutableBytesPtr dataRowKey = new ImmutableBytesPtr(
-indexToDataRowKeyMap.get(indexRowKey));
-dataRow = dataRows.get(dataRowKey);
-Long indexRowTs = result.get(0).getTimestamp();
-Map<Long, List<Cell>> changeTimeline = 
dataRowChanges.get(
-dataRowKey);
-if (changeTimeline == null) {
-List<Cell> resultCells = 
Arrays.asList(dataRow.rawCells());
-Collections.sort(resultCells, 
CellComparator.getInstance().reversed());
-List<Cell> deleteMarkers = new ArrayList<>();
-List<List<Cell>> columns = new LinkedList<>();
-Cell currentColumnCell = null;
-Pair<byte[], byte[]> emptyKV = 
EncodedColumnsUtil.getEmptyKeyValueInfo(
-
EncodedColumnsUtil.getQualifierEncodingScheme(scan));
-List<Cell> currentColumn = null;
-Set<Long> uniqueTimeStamps = new HashSet<>();
-// TODO: From CompactionScanner.formColumns(), see if 
this can be refactored.
-for (Cell cell : resultCells) {
-uniqueTimeStamps.add(cell.getTimestamp());
-if (cell.getType() != Cell.Type.Put) {
-deleteMarkers.add(cell);
-}
-if (CellUtil.matchingColumn(cell, 
QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES,
-emptyKV.getFirst())) {
-continue;
-}
-if (currentColumnCell == null) {
-currentColumn = new LinkedList<>();
-currentColumnCell = cell;
-currentColumn.add(cell);
-} else if (!CellUtil.matchingColumn(cell, 
currentColumnCell)) {
-columns.add(currentColumn);
-currentColumn = new LinkedList<>();
-currentColumnCell = cell;
-currentColumn.add(cell);
-} else {
-currentColumn.add(cell);
-}
+for (Cell cell : resultCells) {
+if (cell.getType() == Cell.Type.DeleteColumn) {
+// DDL is not supported in CDC
+if (cell.getTimestamp() == indexCellTs) {
+isIndexCellDeleteColumn = true;
+break;

Review Comment:
   You can have multiple `DeleteColumn` cells corresponding to different 
columns in the same UPSERT.
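
   For illustration, one way those deletions could be surfaced in the change 
image (a hedged sketch, not the PR's implementation; treating the raw 
qualifier as a readable column name is a simplifying assumption, since 
Phoenix may use encoded qualifiers):

       import java.util.HashMap;
       import java.util.Map;
       import org.apache.hadoop.hbase.Cell;
       import org.apache.hadoop.hbase.CellUtil;
       import org.apache.hadoop.hbase.util.Bytes;

       // Sketch: map each DeleteColumn cell at the change timestamp to a
       // null entry in the change image, one entry per deleted column.
       final class NulledColumnsSketch {
           static Map<String, Object> nulledColumns(Iterable<Cell> cellsAtChangeTs) {
               Map<String, Object> changeImage = new HashMap<>();
               for (Cell cell : cellsAtChangeTs) {
                   if (cell.getType() == Cell.Type.DeleteColumn) {
                       // Assumption: qualifier bytes decode to the column name.
                       changeImage.put(
                               Bytes.toString(CellUtil.cloneQualifier(cell)), null);
                   }
               }
               return changeImage;
           }
       }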



##
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/CDCGlobalIndexRegionScanner.java:
##
@@ -104,115 +94,74 @@ 

[jira] [Commented] (PHOENIX-7176) QueryTimeoutIT#testQueryTimeout fails with incorrect error message

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807216#comment-17807216
 ] 

ASF GitHub Bot commented on PHOENIX-7176:
-

stoty closed pull request #1798: PHOENIX-7176 QueryTimeoutIT#testQueryTimeout 
fails with incorrect err…
URL: https://github.com/apache/phoenix/pull/1798




> QueryTimeoutIT#testQueryTimeout fails with incorrect error message
> --
>
> Key: PHOENIX-7176
> URL: https://issues.apache.org/jira/browse/PHOENIX-7176
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Reporter: Aron Attila Meszaros
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> QueryTimeoutIT sometimes fails with an incorrect error message, e.g. "Total 
> time of query was 1224 ms, but expected to be greater than 1000". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7176) QueryTimeoutIT#testQueryTimeout fails with incorrect error message

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807215#comment-17807215
 ] 

ASF GitHub Bot commented on PHOENIX-7176:
-

stoty commented on PR #1798:
URL: https://github.com/apache/phoenix/pull/1798#issuecomment-1893639297

   merged manually




> QueryTimeoutIT#testQueryTimeout fails with incorrect error message
> --
>
> Key: PHOENIX-7176
> URL: https://issues.apache.org/jira/browse/PHOENIX-7176
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Reporter: Aron Attila Meszaros
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> QueryTimeoutIT sometimes fails with an incorrect error message, e.g. "Total 
> time of query was 1224 ms, but expected to be greater than 1000". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7176 QueryTimeoutIT#testQueryTimeout fails with incorrect err… [phoenix]

2024-01-16 Thread via GitHub


stoty closed pull request #1798: PHOENIX-7176 QueryTimeoutIT#testQueryTimeout 
fails with incorrect err…
URL: https://github.com/apache/phoenix/pull/1798


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] PHOENIX-7176 QueryTimeoutIT#testQueryTimeout fails with incorrect err… [phoenix]

2024-01-16 Thread via GitHub


stoty commented on PR #1798:
URL: https://github.com/apache/phoenix/pull/1798#issuecomment-1893639297

   merged manually


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7176) QueryTimeoutIT#testQueryTimeout fails with incorrect error message

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807172#comment-17807172
 ] 

ASF GitHub Bot commented on PHOENIX-7176:
-

Aarchy commented on PR #1798:
URL: https://github.com/apache/phoenix/pull/1798#issuecomment-1893467610

   The test still fails sometimes, but on another assert, which checks whether 
the elapsed time was greater than the timeout that was set. 
   Logging elapsedTime showed that it was equal to the timeout (1000 ms). 
   
   I'm wondering if this is caused by relying on System.currentTimeMillis() 
instead of System.nanoTime(). Using (>=) in the assert seems to fix it, though. 
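
   A minimal sketch of the measurement this comment suggests (editorial 
illustration, not the committed patch; TIMEOUT_MS and the Runnable parameter 
are hypothetical stand-ins):

       import java.util.concurrent.TimeUnit;
       import static org.junit.Assert.assertTrue;

       // Sketch: time the query with the monotonic System.nanoTime() and
       // accept the boundary case with >= rather than a strict >.
       public class TimeoutAssertSketch {
           static final long TIMEOUT_MS = 1000;   // assumed query timeout

           static void assertTimedOutQuery(Runnable queryThatShouldTimeOut) {
               long startNs = System.nanoTime();
               queryThatShouldTimeOut.run();
               long elapsedMs =
                       TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);
               assertTrue("Total time of query was " + elapsedMs
                       + " ms, but expected at least " + TIMEOUT_MS,
                       elapsedMs >= TIMEOUT_MS);
           }
       }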
   
   




> QueryTimeoutIT#testQueryTimeout fails with incorrect error message
> --
>
> Key: PHOENIX-7176
> URL: https://issues.apache.org/jira/browse/PHOENIX-7176
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Reporter: Aron Attila Meszaros
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> QueryTimeoutIT sometimes fails with an incorrect error message, e.g. "Total 
> time of query was 1224 ms, but expected to be greater than 1000". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] PHOENIX-7176 QueryTimeoutIT#testQueryTimeout fails with incorrect err… [phoenix]

2024-01-16 Thread via GitHub


Aarchy commented on PR #1798:
URL: https://github.com/apache/phoenix/pull/1798#issuecomment-1893467610

   The test still fails sometimes, but on another assert, which checks whether 
the elapsed time was greater than the timeout that was set. 
   Logging elapsedTime showed that it was equal to the timeout (1000 ms). 
   
   I'm wondering if this is caused by relying on System.currentTimeMillis() 
instead of System.nanoTime(). Using (>=) in the assert seems to fix it, though. 
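
   To make the boundary case concrete (illustrative only, not code from the 
PR): with millisecond wall-clock granularity the measured elapsed time can 
land exactly on the timeout, so a strict > is flaky while >= holds.

       // Illustrative demo: elapsed can legitimately equal 1000 here because
       // System.currentTimeMillis() has millisecond granularity.
       public class ElapsedBoundaryDemo {
           public static void main(String[] args) throws InterruptedException {
               long start = System.currentTimeMillis();
               Thread.sleep(1000);   // stands in for a query hitting a 1000 ms timeout
               long elapsed = System.currentTimeMillis() - start;
               // 'elapsed > 1000' can fail when elapsed == 1000;
               // 'elapsed >= 1000' holds.
               System.out.println("elapsed = " + elapsed + " ms");
           }
       }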
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (PHOENIX-7176) QueryTimeoutIT#testQueryTimeout fails with incorrect error message

2024-01-16 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17807171#comment-17807171
 ] 

ASF GitHub Bot commented on PHOENIX-7176:
-

Aarchy opened a new pull request, #1798:
URL: https://github.com/apache/phoenix/pull/1798

   …or message (addendum: modify & add message to assert)
   
   




> QueryTimeoutIT#testQueryTimeout fails with incorrect error message
> --
>
> Key: PHOENIX-7176
> URL: https://issues.apache.org/jira/browse/PHOENIX-7176
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Reporter: Aron Attila Meszaros
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> QueryTimeoutIT sometimes fails with an incorrect error message, e.g. "Total 
> time of query was 1224 ms, but expected to be greater than 1000". 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] PHOENIX-7176 QueryTimeoutIT#testQueryTimeout fails with incorrect err… [phoenix]

2024-01-16 Thread via GitHub


Aarchy opened a new pull request, #1798:
URL: https://github.com/apache/phoenix/pull/1798

   …or message (addendum: modify & add message to assert)
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org