[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180290#comment-16180290
 ] 

Hadoop QA commented on PHOENIX-4224:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12888990/PHOENIX-4224-1.patch
  against master branch at commit 944bed73585a5ff826997895c2da43720b229d8a.
  ATTACHMENT ID: 12888990

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+QueryServices.MAX_SERVER_CACHE_TIME_TO_LIVE_MS_ATTRIB, 
QueryServicesOptions.DEFAULT_MAX_SERVER_CACHE_TIME_TO_LIVE_MS);
+return true; // cache was send more than maxTTL ms ago, 
expecting that it's expired

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.UpsertValuesIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1478//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1478//console

This message is automatically generated.

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3815:
--
Attachment: PHOENIX-3815_v3.patch

Rebased patch and made the following changes:
- Use ParallelWriterIndexCommitter for transactional tables, since we don't 
need to track disabled tables in that case
- Added some more testing around the index that shouldn't be disabled (to make 
sure it's still valid against the data table after write failures to the other 
table).

I measured the run time of MutableIndexFailureIT with and without the patch and 
there's not much difference (4:49 vs 5:05). The difference would be explained 
by the extra writes to the new index table.

Unless there are objections, I'll go ahead and commit this version, 
[~vincentpoon].

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch, PHOENIX-3815_v3.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the one to which the write failed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4233:
--
Attachment: PHOENIX-4233.patch

Something to check for the MR-based scrutiny tool, [~vincentpoon].

> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4233.patch
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-25 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4233:
-

 Summary: IndexScrutiny test tool does not work for salted and 
shared index tables
 Key: PHOENIX-4233
 URL: https://issues.apache.org/jira/browse/PHOENIX-4233
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Our IndexScrutiny test-only tool does not handle salted tables or local or view 
indexes correctly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4233:
-

Assignee: James Taylor

> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4233) IndexScrutiny test tool does not work for salted and shared index tables

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4233:
--
Fix Version/s: 4.12.0

> IndexScrutiny test tool does not work for salted and shared index tables
> 
>
> Key: PHOENIX-4233
> URL: https://issues.apache.org/jira/browse/PHOENIX-4233
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.12.0
>
>
> Our IndexScrutiny test-only tool does not handle salted tables or local or 
> view indexes correctly.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180217#comment-16180217
 ] 

Ankit Singhal commented on PHOENIX-4224:


bq. From the end user's perspective it looks like the query fails by timeout 
or crashes with a StackOverflow exception
It should not crash with a StackOverflow exception, as the number of 
retries/recursions is limited by hashjoin.client.retries.number (default is 5).

Anyway, wasting even a single retry for this particular case, where the cache 
at the server has expired, is unnecessary overhead. +1 for the fix. 
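
For reference, a tiny sketch of how such a bounded retry loop might read that 
limit; the property name is taken from the comment above, and the surrounding 
loop is hypothetical, not Phoenix's actual client code:

{noformat}
import org.apache.hadoop.conf.Configuration;

public class HashJoinRetryBound {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Property name from the comment above; the recursion is bounded by
        // this count, so a StackOverflow should not be reachable.
        int maxRetries = conf.getInt("hashjoin.client.retries.number", 5);
        boolean success = false; // a real client would set this once the
                                 // scan succeeds after a cache resend
        int attempt = 0;
        while (!success && attempt < maxRetries) {
            attempt++;
            System.out.println("cache resend attempt " + attempt + " of " + maxRetries);
        }
        if (!success) {
            System.out.println("giving up after " + maxRetries + " attempts");
        }
    }
}
{noformat}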



> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180166#comment-16180166
 ] 

Josh Elser commented on PHOENIX-4230:
-

bq. something seems wrong with our pre commit build. This patch applies fine 
and compiles against master. Any ideas?

[~jamestaylor] See INFRA-15074 (should have been dealt with last week, but I 
forgot to click the button for "Waiting on Infra" -- sorry). It looks like H16 
runs "hot" and close to disk capacity. When it's (near) full, Maven can't 
actually delete the target directory and we see the above failure (unable to 
clean).

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch, PHOENIX-4230_v2.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180156#comment-16180156
 ] 

Hadoop QA commented on PHOENIX-4230:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12888979/PHOENIX-4230_v2.patch
  against master branch at commit 944bed73585a5ff826997895c2da43720b229d8a.
  ATTACHMENT ID: 12888979

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+// lower the number of rpc retries.  We inherit config from 
HConnectionManager#setServerSideHConnectionRetries,
+// which by default uses a multiplier of 10.  That is too many retries 
for our synchronous index writes
+context.indexUpdates = getIndexUpdates(c.getEnvironment(), 
indexMetaData, getMutationIterator(miniBatchOp), txRollbackAttribute);
+MiniBatchOperationInProgress miniBatchOp, final boolean 
success) throws IOException {
+private void 
setBatchMutateContext(ObserverContext c, 
BatchMutateContext context) {
+private BatchMutateContext 
getBatchMutateContext(ObserverContext c) {
+private static void addMutation(Map 
mutations, ImmutableBytesPtr row, Mutation m) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryWithTableSampleIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1477//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1477//console

This message is automatically generated.

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch, PHOENIX-4230_v2.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-4224:
-
Attachment: PHOENIX-4224-1.patch

A simple version that tracks whether the cache was supposed to have expired on 
the server (in that case we fail immediately) or whether we really need to 
send the cache. 
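
A minimal sketch of what such tracking could look like, comparing a per-server 
send timestamp against the max cache TTL. The QueryServices constant names 
appear in the QA lineLengths output above; the class and method shapes here 
are illustrative, not the attached patch:

{noformat}
import java.util.concurrent.ConcurrentHashMap;

import org.apache.hadoop.hbase.HRegionLocation;

// Remember when the cache was sent to each server, so the client can tell
// "expired on the server" (fail fast) apart from "never sent", e.g. after a
// region move (resend).
public class ServerCacheTracker {
    private final ConcurrentHashMap<HRegionLocation, Long> sentTimeMs =
            new ConcurrentHashMap<>();
    // e.g. read from QueryServices.MAX_SERVER_CACHE_TIME_TO_LIVE_MS_ATTRIB,
    // defaulting to QueryServicesOptions.DEFAULT_MAX_SERVER_CACHE_TIME_TO_LIVE_MS
    private final long maxServerCacheTtlMs;

    public ServerCacheTracker(long maxServerCacheTtlMs) {
        this.maxServerCacheTtlMs = maxServerCacheTtlMs;
    }

    /** True if the cache was never sent to this server and must be sent now. */
    public boolean addServer(HRegionLocation location) {
        return sentTimeMs.putIfAbsent(location, System.currentTimeMillis()) == null;
    }

    /** True if the cache was sent more than maxTTL ms ago, so we expect it expired. */
    public boolean isExpired(HRegionLocation location) {
        Long sentAt = sentTimeMs.get(location);
        return sentAt != null
                && System.currentTimeMillis() - sentAt > maxServerCacheTtlMs;
    }
}
{noformat}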

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4224-1.patch
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-09-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180105#comment-16180105
 ] 

Josh Elser commented on PHOENIX-3757:
-

[~karanmehta93], go for it!

I honestly don't remember - I'd have to refresh myself, it's been so long.

I recall Samarth had asked to try changing the IT's base class. I was worried 
that not using a fresh HBase cluster would result in us not actually testing 
what we think we're testing (if the system tables exist or the PhoenixDriver 
has any cached info). This requires some investigation.

I think Ankit had some edge cases which I wasn't correctly handling -- JIRA 
will serve as a better record than my memory for sure.

> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16180098#comment-16180098
 ] 

Josh Elser commented on PHOENIX-4224:
-

bq. Playing with different scenarios for large joins, I see that in some cases 
it may be useful, and the query would return a result instead of failing. On 
the other hand, it's preferable to fail quickly, letting the user know that 
the cache TTL should be adjusted. 

Oof. That's rough. I'm trying to come up with what the "bandaid" fix would be 
(to avoid a revert and not block 4.12). Maybe fail quickly with a good error 
message?

bq. Working on a fix where we keep track of when the cache has been sent to 
the servers, so we will be able to separate the cases when it expired from the 
cases when it was never sent (due to region movement across the region 
servers). 

Are you close to this kind of fix? Should 4.12 wait for it? (Half a question 
for Sergey, half for James.)

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4230:
--
Attachment: PHOENIX-4230_v2.patch

Final patch.

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch, PHOENIX-4230_v2.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3757) System mutex table not being created in SYSTEM namespace when namespace mapping is enabled

2017-09-25 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179988#comment-16179988
 ] 

Karan Mehta commented on PHOENIX-3757:
--

[~elserj] I can take this up. Can you provide a quick overview of where you 
left the patch? I will also try to read up on it today.

> System mutex table not being created in SYSTEM namespace when namespace 
> mapping is enabled
> --
>
> Key: PHOENIX-3757
> URL: https://issues.apache.org/jira/browse/PHOENIX-3757
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
>  Labels: namespaces
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3757.001.patch, PHOENIX-3757.002.patch
>
>
> Noticed this issue while writing a test for PHOENIX-3756:
> The SYSTEM.MUTEX table is always created in the default namespace, even when 
> {{phoenix.schema.isNamespaceMappingEnabled=true}}. At a glance, it looks like 
> the logic for the other system tables isn't applied to the mutex table.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179984#comment-16179984
 ] 

Sergey Soldatov commented on PHOENIX-4224:
--

[~jamestaylor] I think we can fix it. Working on a fix where we keep track of 
when the cache has been sent to the servers, so we will be able to separate 
the cases when it expired from the cases when it was never sent (due to region 
movement across the region servers).  

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179960#comment-16179960
 ] 

Thomas D'Silva commented on PHOENIX-4230:
-

+1

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4224:
--
Priority: Blocker  (was: Major)

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179893#comment-16179893
 ] 

James Taylor commented on PHOENIX-4224:
---

Ah, ok. Thanks, [~sergey.soldatov] - didn't realize that. Maybe we should 
revert PHOENIX-4010?

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179890#comment-16179890
 ] 

Sergey Soldatov commented on PHOENIX-4224:
--

[~jamestaylor] That's definitely a regression. Previously the client got an 
exception that the cache had expired when that happened on the RS side. Now it 
falls into the resending logic, but doesn't resend the cache; it only recreates 
scans that do nothing and create excessive network traffic as well as load on 
the region servers. From the end user's perspective it looks like the query 
fails by timeout or crashes with a StackOverflow exception. To figure out that 
the TTL should be adjusted, he/she needs to check the RS logs.  

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179874#comment-16179874
 ] 

James Taylor commented on PHOENIX-3815:
---

Ping [~vincentpoon]. Any insights? See previous comment.

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the one to which the write failed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179860#comment-16179860
 ] 

James Taylor commented on PHOENIX-4224:
---

[~elserj] - this doesn't sound like a blocker (or a regression), so I'm 
lowering the priority. Please let me know if you disagree.

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4224:
--
Priority: Major  (was: Blocker)

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4224) Automatic resending cache for HashJoin doesn't work when cache has expired on server side

2017-09-25 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179843#comment-16179843
 ] 

Sergey Soldatov commented on PHOENIX-4224:
--

I'm still thinking about whether we need to resend the cache to the servers 
where it has really expired. Playing with different scenarios for large joins, 
I see that in some cases it may be useful, and the query would return a result 
instead of failing. On the other hand, it's preferable to fail quickly, 
letting the user know that the cache TTL should be adjusted.

> Automatic resending cache for HashJoin doesn't work when cache has expired on 
> server side 
> --
>
> Key: PHOENIX-4224
> URL: https://issues.apache.org/jira/browse/PHOENIX-4224
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
>
> The problem occurs when the cache has expired on the server side and the 
> client wants to resend it. This problem was introduced in PHOENIX-4010. The 
> actual result in this case is that the client doesn't send the cache because 
> of the following check:
> {noformat}
>   if (cache.addServer(tableRegionLocation) ... )) {
>   success = addServerCache(table, 
> startkeyOfRegion, pTable, cacheId, cache.getCachePtr(), cacheFactory, 
> txState);
>   }
> {noformat}
> Since the region location hasn't changed, we don't actually send the cache 
> again, but produce a new scanner which will fail with the same error, and 
> the client will fall into recursion. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179818#comment-16179818
 ] 

James Taylor commented on PHOENIX-4230:
---

[~elserj] - something seems wrong with our pre commit build. This patch applies 
fine and compiles against master. Any ideas?

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179805#comment-16179805
 ] 

Hadoop QA commented on PHOENIX-4230:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12888952/PHOENIX-4230.patch
  against master branch at commit 944bed73585a5ff826997895c2da43720b229d8a.
  ATTACHMENT ID: 12888952

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail .

Compilation errors resume:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on 
project phoenix-core: Failed to clean project: Failed to delete 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/target
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :phoenix-core


Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1476//console

This message is automatically generated.

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4230:
--
Attachment: PHOENIX-4230.patch

Please review, [~tdsilva]. This is the equivalent of the change we made on the 
non-transactional side; I'm just making it here as well since it was an 
important one for cluster health.

> Write index updates in postBatchMutateIndispensably for transactional tables
> 
>
> Key: PHOENIX-4230
> URL: https://issues.apache.org/jira/browse/PHOENIX-4230
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4230.patch
>
>
> This change was already made for non-transactional tables. We should make the 
> same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-25 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179728#comment-16179728
 ] 

churro morales commented on PHOENIX-4214:
-

+1 lgtm 

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214.master.v1.patch, 
> splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.
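
A rough sketch of one way to avoid the starvation described above: once a 
split or close has been requested, stop admitting new writing scans and let 
the existing count drain. All names here are hypothetical; this is not the 
attached patch:

{noformat}
import java.util.concurrent.atomic.AtomicInteger;

public class ScanReferenceCounter {
    private final AtomicInteger activeScans = new AtomicInteger();
    private volatile boolean closePending; // set by preSplit/preClose

    /** Called when a writing scan starts; rejected if a split/close is pending. */
    public void registerScan() {
        if (closePending) {
            throw new IllegalStateException("region is splitting/closing; rejecting scan");
        }
        activeScans.incrementAndGet();
        // Re-check to close the race with requestClose() flipping the flag.
        if (closePending) {
            activeScans.decrementAndGet();
            throw new IllegalStateException("region is splitting/closing; rejecting scan");
        }
    }

    /** Called when the scan finishes, successfully or not. */
    public void unregisterScan() {
        activeScans.decrementAndGet();
    }

    /** Called from preSplit/preClose: stop admitting scans, then drain. */
    public void requestClose() throws InterruptedException {
        closePending = true;
        while (activeScans.get() > 0) {
            Thread.sleep(10); // existing scans finish; no new ones are admitted
        }
    }
}
{noformat}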



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4232) Hide shadow cell and commit table access in TAL

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179686#comment-16179686
 ] 

James Taylor commented on PHOENIX-4232:
---

FYI, [~ohads].

> Hide shadow cell and commit table access in TAL
> ---
>
> Key: PHOENIX-4232
> URL: https://issues.apache.org/jira/browse/PHOENIX-4232
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>  Labels: omid
>
> Omid needs to project the shadow cell column qualifier and then based on the 
> value, filter the row. If the shadow cell is not found, it needs to perform a 
> lookup in the commit table (the source of truth) to get the information 
> instead. For the Phoenix integration, there are likely two TAL methods that 
> can be added to handle this:
> # Add method call to new TAL method in preScannerOpen call on coprocessor 
> that projects the shadow cell qualifiers and sets the time range. This is 
> equivalent to the TransactionProcessor.preScannerOpen that Tephra does. It's 
> possible this work could be done on the client side as well, but it's more 
> likely that the stuff that Phoenix does may override this (but we could get 
> it to work if need be).
> # Add TAL method that returns a RegionScanner to abstract out the filtering 
> of the row (potentially querying commit table). This RegionScanner would be 
> added as the first in the chain in the 
> NonAggregateRegionScannerFactory.getRegionScanner() API.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4232) Hide shadow cell and commit table access in TAL

2017-09-25 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4232:
-

 Summary: Hide shadow cell and commit table access in TAL
 Key: PHOENIX-4232
 URL: https://issues.apache.org/jira/browse/PHOENIX-4232
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Omid needs to project the shadow cell column qualifier and then based on the 
value, filter the row. If the shadow cell is not found, it needs to perform a 
lookup in the commit table (the source of truth) to get the information 
instead. For the Phoenix integration, there are likely two TAL methods that can 
be added to handle this:
# Add method call to new TAL method in preScannerOpen call on coprocessor that 
projects the shadow cell qualifiers and sets the time range. This is equivalent 
to the TransactionProcessor.preScannerOpen that Tephra does. It's possible this 
work could be done on the client side as well, but it's more likely that the 
stuff that Phoenix does may override this (but we could get it to work if need 
be).
# Add TAL method that returns a RegionScanner to abstract out the filtering of 
the row (potentially querying commit table). This RegionScanner would be added 
as the first in the chain in the 
NonAggregateRegionScannerFactory.getRegionScanner() API.
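
A hypothetical sketch of the shape the two TAL additions could take; the 
interface and method names are illustrative, not the actual Phoenix TAL API:

{noformat}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

public interface TransactionAbstractionLayer {
    /**
     * #1: invoked from the coprocessor's preScannerOpen; projects the shadow
     * cell column qualifiers and sets the scan's time range (the equivalent
     * of what TransactionProcessor.preScannerOpen does for Tephra).
     */
    void preScannerOpen(Scan scan);

    /**
     * #2: wraps the region scanner so each row can be filtered based on its
     * shadow cells, falling back to a commit table lookup (the source of
     * truth) when they are missing. The returned scanner would be added first
     * in the chain in NonAggregateRegionScannerFactory.getRegionScanner().
     */
    RegionScanner wrapRegionScanner(RegionScanner delegate);
}
{noformat}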



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179470#comment-16179470
 ] 

James Taylor commented on PHOENIX-4214:
---

Ping [~lhofhansl]. Did you +1 this one above?

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214.master.v1.patch, 
> splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4231) Support restriction of remote UDF load sources

2017-09-25 Thread Andrew Purtell (JIRA)
Andrew Purtell created PHOENIX-4231:
---

 Summary: Support restriction of remote UDF load sources 
 Key: PHOENIX-4231
 URL: https://issues.apache.org/jira/browse/PHOENIX-4231
 Project: Phoenix
  Issue Type: Improvement
Reporter: Andrew Purtell


When allowUserDefinedFunctions is true, users can load UDFs remotely via a jar 
file from any HDFS filesystem reachable on the network. The setting 
hbase.dynamic.jars.dir can be used to restrict locations for jar loading, but 
it is only applied to jars loaded from the local filesystem. We should 
implement support for a similar restriction, via configuration, for jars 
loaded via hdfs:// URIs.
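
A minimal sketch of the proposed restriction, under the assumption that it 
would be enforced by validating each requested jar URI against the configured 
hbase.dynamic.jars.dir before loading; the helper and its placement are 
hypothetical:

{noformat}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public final class UdfJarPathValidator {
    private UdfJarPathValidator() {}

    /** True if the jar lives directly inside the configured dynamic jars dir. */
    public static boolean isAllowed(Configuration conf, URI jarUri) {
        String allowedDir = conf.get("hbase.dynamic.jars.dir");
        if (allowedDir == null) {
            return false; // nothing configured: reject remote jar loading
        }
        // Apply the same check to hdfs:// URIs as to local filesystem paths.
        Path requestedParent = new Path(jarUri).getParent();
        return requestedParent != null && requestedParent.equals(new Path(allowedDir));
    }
}
{noformat}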



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (PHOENIX-4214) Scans which write should not block region split or close

2017-09-25 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4214:
-

Assignee: Vincent Poon

> Scans which write should not block region split or close
> 
>
> Key: PHOENIX-4214
> URL: https://issues.apache.org/jira/browse/PHOENIX-4214
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4214.master.v1.patch, 
> splitDuringUpsertSelect_wip.patch
>
>
> PHOENIX-3111 introduced a scan reference counter which is checked during 
> region preSplit and preClose.  However, a steady stream of UPSERT SELECT or 
> DELETE can keep the count above 0 indefinitely, preventing or greatly 
> delaying a region split or close.
> We should try to avoid starvation of the split / close request, and 
> fail/reject queries where appropriate.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2017-09-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179454#comment-16179454
 ] 

Andrew Purtell commented on PHOENIX-672:


bq. how should we handle cases where namespace mapping is not enabled and the 
user tries to assign permissions to tables? Shall we throw exceptions to say 
that it is not supported 

Sounds good to me

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>  Labels: gsoc2016, security
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4230) Write index updates in postBatchMutateIndispensably for transactional tables

2017-09-25 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4230:
-

 Summary: Write index updates in postBatchMutateIndispensably for 
transactional tables
 Key: PHOENIX-4230
 URL: https://issues.apache.org/jira/browse/PHOENIX-4230
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor


This change was already made for non-transactional tables. We should make the 
same change for transactional tables to prevent RPCs while rows are locked.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] New PMC Member: Sergey Soldatov

2017-09-25 Thread James Taylor
Great to have you on the PMC, Sergey. Congrats!

James

On Mon, Sep 25, 2017 at 10:28 AM, Josh Mahonin  wrote:

> Congratulations Sergey!
>
> On Sun, Sep 24, 2017 at 4:05 PM, Ted Yu  wrote:
>
> > Congratulations, Sergey !
> >
> > On Sun, Sep 24, 2017 at 1:00 PM, Josh Elser  wrote:
> >
> >> All,
> >>
> >> The Apache Phoenix PMC has recently voted to extend an invitation to
> >> Sergey to join the PMC in recognition of his continued contributions to
> the
> >> community. We are happy to share that he has accepted this offer.
> >>
> >> Please join me in congratulating Sergey! Congratulations on a
> >> well-deserved invitation.
> >>
> >> - Josh (on behalf of the entire PMC)
> >>
> >
> >
>


[jira] [Commented] (PHOENIX-4219) Index gets out of sync on HBase 1.x

2017-09-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16179396#comment-16179396
 ] 

James Taylor commented on PHOENIX-4219:
---

Any further insights, [~vincentpoon]? Have you been able to identify the root 
cause? Do you plan to submit a patch?

> Index gets out of sync on HBase 1.x
> ---
>
> Key: PHOENIX-4219
> URL: https://issues.apache.org/jira/browse/PHOENIX-4219
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
> Attachments: PHOENIX-4219_test.patch
>
>
> When writing batches in parallel with multiple background threads, it seems 
> the index sometimes gets out of sync. This only happens on the master and 
> 4.x-HBase-1.2 branches; the tests pass on 4.x-HBase-0.98.
> See the attached test, which writes with 2 background threads and a batch 
> size of 100.
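
For readers without the attachment, a condensed sketch of the kind of repro 
the test performs: two background threads upserting in batches of 100, then a 
count comparison between the data table and the index. The connection URL, 
DDL, and row counts are illustrative, not the attached test itself:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class IndexOutOfSyncRepro {
    static final String URL = "jdbc:phoenix:localhost";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL)) {
            conn.createStatement().execute(
                    "CREATE TABLE T (ID INTEGER PRIMARY KEY, V VARCHAR)");
            conn.createStatement().execute("CREATE INDEX I ON T (V)");
        }
        Runnable writer = () -> {
            try (Connection conn = DriverManager.getConnection(URL)) {
                PreparedStatement ps =
                        conn.prepareStatement("UPSERT INTO T VALUES (?, ?)");
                for (int i = 1; i <= 10000; i++) {
                    ps.setInt(1, (int) (Math.random() * Integer.MAX_VALUE));
                    ps.setString(2, "v" + i);
                    ps.executeUpdate();
                    if (i % 100 == 0) conn.commit(); // batch size of 100
                }
                conn.commit();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };
        Thread t1 = new Thread(writer);
        Thread t2 = new Thread(writer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        try (Connection conn = DriverManager.getConnection(URL)) {
            // Counts should match if the index is in sync with the data table.
            ResultSet data = conn.createStatement().executeQuery(
                    "SELECT /*+ NO_INDEX */ COUNT(*) FROM T");
            ResultSet index = conn.createStatement().executeQuery(
                    "SELECT COUNT(V) FROM T"); // served by index I on V
            data.next();
            index.next();
            System.out.println("data=" + data.getLong(1)
                    + " index=" + index.getLong(1));
        }
    }
}
{noformat}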



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] New PMC Member: Sergey Soldatov

2017-09-25 Thread Josh Mahonin
Congratulations Sergey!

On Sun, Sep 24, 2017 at 4:05 PM, Ted Yu  wrote:

> Congratulations, Sergey !
>
> On Sun, Sep 24, 2017 at 1:00 PM, Josh Elser  wrote:
>
>> All,
>>
>> The Apache Phoenix PMC has recently voted to extend an invitation to
>> Sergey to join the PMC in recognition of his continued contributions to the
>> community. We are happy to share that he has accepted this offer.
>>
>> Please join me in congratulating Sergey! Congratulations on a
>> well-deserved invitation.
>>
>> - Josh (on behalf of the entire PMC)
>>
>
>


Re: Phoenix code quality

2017-09-25 Thread Andrew Purtell
Also, in my experience curating the Hadoop ecosystem (a process that starts
with Apache $FOO release x.y.z, then applies individual upstream commits
one patch at a time), Phoenix code is more tightly coupled than any other
project I have encountered, out of: ZooKeeper, Hadoop, HBase, Pig, Hive,
Spark. Most of the time I can cherry pick a single upstream commit onto a
release successfully. Sometimes I need to apply one or two prerequisite
changes first. Those changes tend to apply successfully. This is due in no
small part to how those projects structure their code. By successfully I
mean with only minor fuzz or trivial rejects like import ordering. However,
not so with Phoenix. More often than not the rejects are nontrivial,
especially the integration tests. A change to Phoenix often refactors an
integration test rather than adds or subtracts compilation or test units.
Where cherry picks from other projects typically bring over new tests which
pass, cherry picks from Phoenix usually are a pain to merge and then
numerous tests fail.

I suspect this tendency toward coupling will impact how successful (or not)
a branch merge process might be. Certainly developers working on the
feature branch will suffer more when rebasing on the main branch than in
other projects, like HBase, where this approach has been successful.

I think Kevin Liew's earlier comment, quoted below, also touched on this 
concern.


*> Parts of the codebase can be quite intimidating due to the amount of
state that needs to be tracked.*




On Mon, Sep 25, 2017 at 10:10 AM, Andrew Purtell 
wrote:

> A model that has worked well for HBase is large feature development on a
> separate branch, followed by a process to do a branch merge back to the
> main project. The big upside is feature developers have near total freedom.
> The big downside is the merge can take a while to review/approve and
> rebasing the code to land the feature is a lot of last minute work and the
> merge can temporarily destabilize both the feature and the main project. On
> balance, the freedom to do independent feature development without
> impacting the main project makes it worth it, IMHO.
>
>
>
> On Mon, Sep 25, 2017 at 9:30 AM, Jan Fernando 
> wrote:
>
>> Lars,
>>
>> I think these are really awesome guidelines. As Phoenix reaches a new
>> phase
>> of maturity and operational complexity (which is really exciting and
>> important for the project IMHO), I think these things are becoming even
>> more important.
>>
>> Re #5, I agree we need to err on the side of stability. I agree that if
>> features are there in main and documented, people will use them. However,
>> right now it's hard for users of Phoenix to discern which features are mature
>> versus which features may still need hardening at scale. I think it might
>> help to actually come up with a more standardized process for developing
>> "beta" or new high impact features. Perhaps we can follow what other
>> projects like HBase do. For example: Should big changes be in their own
>> branch? In some cases we have done things like this in Phoenix (e.g.
>> Calcite) and in others we have not (e.g. transaction support).  I think
>> consistency would be really helpful.
>>
>> So, what should the guidelines be on when to create a new branch and when
>> to merge into the main branch? Is this a good model? I think getting input
>> from HBase committers on this thread on what has worked and what hasn't
>> would be great so we don't reinvent the wheel.
>>
>> I think something like this could help ensure that main is stable and
>> always ready for prime time and make it easier for developers to discern
>> which are "beta" features that they can use at their discretion.
>>
>> Thanks,
>> --Jan
>>
>> On Sat, Sep 23, 2017 at 8:58 AM, Nick Dimiduk  wrote:
>>
>> > Lars,
>> >
>> > This is a great list of guidelines. We should publish it on the
>> > contributing [0] section of the public site.
>> >
>> > -n
>> >
>> > [0]: http://phoenix.apache.org/contributing.html
>> >
>> > On Fri, Sep 22, 2017 at 4:12 PM lars hofhansl  wrote:
>> >
>> > > Any comments? Is this simply not a concern?
>> > > -- Lars
>> > >   From: lars hofhansl 
>> > >  To: Dev 
>> > >  Sent: Wednesday, September 13, 2017 10:22 AM
>> > >  Subject: Fw: Phoenix code quality
>> > >
>> > > Hi all Phoenix developers,
>> > > here's a thread that I had started on the private PMC list, and we
>> agreed
>> > > to have this as a public discussion.
>> > >
>> > >
>> > > I'd like to solicit feedback on the 6 steps/recommendations below and
>> > > how we can ingrain those into the development process.
>> > > Comments, concerns, are - as always - welcome!
>> > > -- Lars
>> > > - Forwarded Message -
>> > >  From: lars hofhansl 
>> > >  To: Private 
>> > >  Sent: Tuesday, September 5, 2017 9:59 PM
>> > >  Subject: Phoenix code quality

[jira] [Created] (PHOENIX-4229) Parent-Child linking rows in System.Catalog break tenant view replication

2017-09-25 Thread Geoffrey Jacoby (JIRA)
Geoffrey Jacoby created PHOENIX-4229:


 Summary: Parent-Child linking rows in System.Catalog break tenant 
view replication
 Key: PHOENIX-4229
 URL: https://issues.apache.org/jira/browse/PHOENIX-4229
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.11.0, 4.12.0
Reporter: Geoffrey Jacoby


PHOENIX-2051 introduced new Parent-Child linking rows to System.Catalog that 
speed up view deletion. Unfortunately, this breaks assumptions in PHOENIX-3639, 
which gives a way to replicate tenant views from one cluster to another. (It 
assumes that all the metadata for a tenant view is owned by the tenant -- the 
linking rows are not.) 

PHOENIX-3639 was a workaround in the first place to the more fundamental design 
problem that Phoenix places the metadata for both table schemas -- which should 
never be replicated -- in the same table and column family as the metadata for 
tenant views, which should be replicated. 

Note that the linking rows also make it more difficult to ever split these two 
datasets apart, as proposed in PHOENIX-3520.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Phoenix code quality

2017-09-25 Thread Andrew Purtell
A model that has worked well for HBase is large feature development on a
separate branch, followed by a process to do a branch merge back to the
main project. The big upside is feature developers have near total freedom.
The big downside is the merge can take a while to review/approve and
rebasing the code to land the feature is a lot of last minute work and the
merge can temporarily destabilize both the feature and the main project. On
balance, the freedom to do independent feature development without
impacting the main project makes it worth it, IMHO.



On Mon, Sep 25, 2017 at 9:30 AM, Jan Fernando 
wrote:

> Lars,
>
> I think these are really awesome guidelines. As Phoenix reaches a new phase
> of maturity and operational complexity (which is really exciting and
> important for the project IMHO), I think these things are becoming even
> more important.
>
> Re #5, I agree we need to err on the side of stability. I agree if features
> are there in main and documented people will use them. However, right now
> it's hard for users of Phoenix to discern which features are mature
> versus which features may still need hardening at scale. I think it might
> help to actually come up with a more standardized process for developing
> "beta" or new high impact features. Perhaps we can follow what other
> projects like HBase do. For example: Should big changes be in their own
> branch? In some cases we have done things like this in Phoenix (e.g.
> Calcite) and in others we have not (e.g. transaction support). I think
> consistency would be really helpful.
>
> So, what should the guidelines be on when to create a new branch and when
> to merge into the main branch? Is this a good model? I think getting input
> from HBase committers on this thread on what has worked and what hasn't
> would be great so we don't reinvent the wheel.
>
> I think something like this could help ensure that main is stable and
> always ready for prime time and make it easier for developers to discern
> which are "beta" features that they can use at their discretion.
>
> Thanks,
> --Jan
>
> On Sat, Sep 23, 2017 at 8:58 AM, Nick Dimiduk  wrote:
>
> > Lars,
> >
> > This is a great list of guidelines. We should publish it on the
> > contributing [0] section of the public site.
> >
> > -n
> >
> > [0]: http://phoenix.apache.org/contributing.html
> >
> > On Fri, Sep 22, 2017 at 4:12 PM lars hofhansl  wrote:
> >
> > > Any comments? Is this simply not a concern?
> > > -- Lars
> > >   From: lars hofhansl 
> > >  To: Dev 
> > >  Sent: Wednesday, September 13, 2017 10:22 AM
> > >  Subject: Fw: Phoenix code quality
> > >
> > > Hi all Phoenix developers,
> > > here's a thread that I had started on the private PMC list, and we
> agreed
> > > to have this as a public discussion.
> > >
> > >
> > > I'd like to solicit feedback on the 6 steps/recommendations below and
> > > how we can ingrain those into the development process.
> > > Comments, concerns, are - as always - welcome!
> > > -- Lars
> > > - Forwarded Message -
> > >  From: lars hofhansl 
> > >  To: Private 
> > >  Sent: Tuesday, September 5, 2017 9:59 PM
> > >  Subject: Phoenix code quality
> > >
> > > Hi all,
> > > I realize this might be a difficult topic, and let me prefix this by
> > > saying that this is my opinion only.
> > > Phoenix is coming to a point where big organizations are relying on it.
> > > At Salesforce we do billions of Phoenix queries per day... And we had a
> > > bunch of recent production issues - only in part caused by Phoenix.
> > >
> > > If there was a patch here and there that lacks quality, tests, comments,
> > > or proper documentation, then it's the fault of the person who created the
> > > patch.
> > > If, however, this happens with some frequency, then it is a problem that
> > > should involve PMC and committers who review and commit the patches in
> > > question.
> > > I'd like to suggest the following:
> > > 1. Comments in the code should be considered when judging a patch for its
> > > merit. No need to go overboard, but there should be enough comments so that
> > > someone new to the code can get an idea about what this code is doing.
> > > 2. Eyeball each patch for how it would scale. Will it all work on 1000
> > > machines? With 1bn rows? With 1000 indexes? Etc., etc. If it's not obvious,
> > > ask the creator of the patch. Agree on what the scaling goals should
> > > be. (For anything that works only for a few million rows or on a dozen
> > > machines, nobody in their right mind would accept the complexity of running
> > > Phoenix - and HBase, HDFS, ZK, etc. - folks would and should simply use
> > > Postgres.)
> > > 3. Check how a patch will behave under failure. Machine failures are
> > > common. Regions may not be reachable for a bit, etc. Are there good timeouts?
> > > 

Re: Phoenix code quality

2017-09-25 Thread Jan Fernando
Lars,

I think these are really awesome guidelines. As Phoenix reaches a new phase
of maturity and operational complexity (which is really exciting and
important for the project IMHO), I think these things are becoming even
more important.

Re #5, I agree we need to err on the side of stability. I agree if features
are there in main and documented people will use them. However, right now
> it's hard for users of Phoenix to discern which features are mature
versus which features may still need hardening at scale. I think it might
help to actually come up with a more standardized process for developing
"beta" or new high impact features. Perhaps we can follow what other
projects like HBase do. For example: Should big changes be in their own
branch? In some cases we have done things like this in Phoenix (e.g.
> Calcite) and in others we have not (e.g. transaction support). I think
consistency would be really helpful.

> So, what should the guidelines be on when to create a new branch and when
to merge into the main branch? Is this a good model? I think getting input
from HBase committers on this thread on what has worked and what hasn't
would be great so we don't reinvent the wheel.

> I think something like this could help ensure that main is stable and
always ready for prime time and make it easier for developers to discern
which are "beta" features that they can use at their discretion.

Thanks,
--Jan

On Sat, Sep 23, 2017 at 8:58 AM, Nick Dimiduk  wrote:

> Lars,
>
> This is a great list of guidelines. We should publish it on the
> contributing [0] section of the public site.
>
> -n
>
> [0]: http://phoenix.apache.org/contributing.html
>
> On Fri, Sep 22, 2017 at 4:12 PM lars hofhansl  wrote:
>
> > Any comments? Is this simply not a concern?
> > -- Lars
> >   From: lars hofhansl 
> >  To: Dev 
> >  Sent: Wednesday, September 13, 2017 10:22 AM
> >  Subject: Fw: Phoenix code quality
> >
> > Hi all Phoenix developers,
> > here's a thread that I had started on the private PMC list, and we agreed
> > to have this as a public discussion.
> >
> >
> > I'd like to solicit feedback on the 6 steps/recommendations below and
> > how we can ingrain those into the development process.
> > Comments, concerns, are - as always - welcome!
> > -- Lars
> > - Forwarded Message -
> >  From: lars hofhansl 
> >  To: Private 
> >  Sent: Tuesday, September 5, 2017 9:59 PM
> >  Subject: Phoenix code quality
> >
> > Hi all,
> > I realize this might be a difficult topic, and let me prefix this by
> > saying that this is my opinion only.
> > Phoenix is coming to a point where big organizations are relying on it.
> > At Salesforce we do billions of Phoenix queries per day... And we had a
> > bunch of recent production issues - only in part caused by Phoenix.
> >
> > If there was a patch here and there that lacks quality, tests, comments,
> > or proper documentation, then it's the fault of the person who created the
> > patch.
> > If, however, this happens with some frequency, then it is a problem that
> > should involve PMC and committers who review and commit the patches in
> > question.
> > I'd like to suggest the following:
> > 1. Comments in the code should be considered when judging a patch for its
> > merit. No need to go overboard, but there should be enough comments so that
> > someone new to the code can get an idea about what this code is doing.
> > 2. Eyeball each patch for how it would scale. Will it all work on 1000
> > machines? With 1bn rows? With 1000 indexes? Etc., etc. If it's not obvious,
> > ask the creator of the patch. Agree on what the scaling goals should
> > be. (For anything that works only for a few million rows or on a dozen
> > machines, nobody in their right mind would accept the complexity of running
> > Phoenix - and HBase, HDFS, ZK, etc. - folks would and should simply use
> > Postgres.)
> > 3. Check how a patch will behave under failure. Machine failures are
> > common. Regions may not be reachable for a bit, etc. Are there good timeouts?
> > Everything should gracefully continue to work.
> >
> > 4. Double check that tests check for corner conditions.
> > 5. Err on the side of stability, rather than committing a patch as beta.
> > If it's in the code, people _will_ use it.
> > 6. Are all config options properly explained, and do they make sense? It's better
> > to err on the side of fewer config options.
> >
> > 7. Probably more stuff...
> >
> > Again. Just MHO. Many of these things are already done. But I still
> > thought it might be good to have a quick discussion around this.
> >
> > Comments?
> > Thanks.
> > -- Lars
> >
> >
> >
> >
> >
>


Re: Phoenix code quality

2017-09-25 Thread Cody Marcel
@Kevin Liew, that's a good idea. If you see places where this can be done,
file a JIRA for it, as this is a great way to make incremental improvements.
Well-defined small things like this also make for simple tasks that
noobs can pick up and contribute.

I like following the boy scout rule: leave the code better than you found it.
Following this not only has the advantage of cleaning up code in general,
but it also targets the improvements at the code paths that people actually
use. It's not time wasted on random cleanup.

http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule
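
To sketch what Kevin suggests below (hypothetical names, not actual Phoenix
code): extract the stateless arithmetic from a method that mixes it with
connection state, so the pure part can be unit tested directly.

// Before (hypothetical): logic entangled with connection/config state.
// long estimateScanCost(Scan scan) {
//     long regions = connection.getRegionCount(scan);  // I/O and state
//     return regions * config.getCostPerRegion();      // pure arithmetic
// }

// After: the arithmetic becomes a pure static function that a unit test
// can call with plain longs; only a thin caller still touches state.
class ScanCostSketch {
    static long estimateScanCost(long regionCount, long costPerRegion) {
        return regionCount * costPerRegion;
    }
}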




On Fri, Sep 22, 2017 at 6:25 PM, Kevin Liew  wrote:

> Parts of the codebase can be quite intimidating due to the amount of state
> that needs to be tracked. In future patches, there could be an attempt to
> take cues from functional programming styles and decompose larger functions
> into "pure functions". This would make the project more accessible to new
> developers and make it easier to add test coverage through unit testing.
>
> Kevin
>
> On Fri, Sep 22, 2017 at 4:12 PM lars hofhansl  wrote:
>
> > Any comments? Is this simply not a concern?
> > -- Lars
> >   From: lars hofhansl 
> >  To: Dev 
> >  Sent: Wednesday, September 13, 2017 10:22 AM
> >  Subject: Fw: Phoenix code quality
> >
> > Hi all Phoenix developers,
> > here's a thread that I had started on the private PMC list, and we agreed
> > to have this as a public discussion.
> >
> >
> > I'd like to solicit feedback on the 6 steps/recommendations below and
> > how we can ingrain those into the development process.
> > Comments, concerns, are - as always - welcome!
> > -- Lars
> > - Forwarded Message -
> >  From: lars hofhansl 
> >  To: Private 
> >  Sent: Tuesday, September 5, 2017 9:59 PM
> >  Subject: Phoenix code quality
> >
> > Hi all,
> > I realize this might be a difficult topic, and let me prefix this by
> > saying that this is my opinion only.
> > Phoenix is coming to a point where big organizations are relying on it.
> > At Salesforce we do billions of Phoenix queries per day... And we had a
> > bunch of recent production issues - only in part caused by Phoenix.
> >
> > If there was a patch here and there that lacks quality, tests, comments,
> > or proper documentation, then it's the fault of the person who created the
> > patch.
> > If, however, this happens with some frequency, then it is a problem that
> > should involve PMC and committers who review and commit the patches in
> > question.
> > I'd like to suggest the following:
> > 1. Comments in the code should be considered when judging a patch for its
> > merit. No need to go overboard, but there should be enough comments so that
> > someone new to the code can get an idea about what this code is doing.
> > 2. Eyeball each patch for how it would scale. Will it all work on 1000
> > machines? With 1bn rows? With 1000 indexes? Etc., etc. If it's not obvious,
> > ask the creator of the patch. Agree on what the scaling goals should
> > be. (For anything that works only for a few million rows or on a dozen
> > machines, nobody in their right mind would accept the complexity of running
> > Phoenix - and HBase, HDFS, ZK, etc. - folks would and should simply use
> > Postgres.)
> > 3. Check how a patch will behave under failure. Machine failures are
> > common. Regions may not be reachable for a bit, etc. Are there good timeouts?
> > Everything should gracefully continue to work.
> >
> > 4. Double check that tests check for corner conditions.
> > 5. Err on the side of stability, rather than committing a patch as beta.
> > If it's in the code, people _will_ use it.
> > 6. Are all config options properly explained, and do they make sense? It's better
> > to err on the side of fewer config options.
> >
> > 7. Probably more stuff...
> >
> > Again. Just MHO. Many of these things are already done. But I still
> > thought it might be good to have a quick discussion around this.
> >
> > Comments?
> > Thanks.
> > -- Lars
> >
> >
> >
> >
> >
>


[jira] [Commented] (PHOENIX-4198) Remove the need for users to have access to the Phoenix SYSTEM tables to create tables

2017-09-25 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16179140#comment-16179140
 ] 

Ankit Singhal commented on PHOENIX-4198:


bq. Since we already acquire the required locks before calling HTable.batch, 
would that be atomic as well?
Actually, it will still not be atomic. I changed the implementation to use 
MultiRowMutationEndpoint instead of HTable.batch to fix it. (Thanks [~devaraj] 
for the pointer.)
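
For reference, a minimal sketch of the MultiRowMutationEndpoint usage pattern 
(assuming the endpoint is loaded on the table and all mutated rows fall in the 
same region, which is the endpoint's requirement for atomicity; the two catalog 
Puts here are hypothetical stand-ins):

{noformat}
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel;
import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.protobuf.generated.ClientProtos.MutationProto;
import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MultiRowMutationService;
import org.apache.hadoop.hbase.protobuf.generated.MultiRowMutationProtos.MutateRowsRequest;

// Atomically apply several Puts to rows co-located in one region,
// instead of the non-atomic HTable.batch().
void atomicCatalogUpdate(Table table, Put headerRow, Put linkRow) throws Exception {
    MutateRowsRequest.Builder req = MutateRowsRequest.newBuilder();
    req.addMutationRequest(ProtobufUtil.toMutation(MutationProto.MutationType.PUT, headerRow));
    req.addMutationRequest(ProtobufUtil.toMutation(MutationProto.MutationType.PUT, linkRow));

    // Route the call to the region hosting the first row; all rows must be in it.
    CoprocessorRpcChannel channel = table.coprocessorService(headerRow.getRow());
    MultiRowMutationService.BlockingInterface service =
        MultiRowMutationService.newBlockingStub(channel);
    service.mutateRows(null, req.build());
}
{noformat}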

bq.If a user has create access on the namespace can he create a view, or does 
he also need read access on the data table to create the view?
 The user still needs read access on the table.

bq.I think for creating an index a user should have create access for the 
schema/namespace. While creating an index users that have read access on the 
data table should also be granted read access on the index. Users that have 
write access on the table should be granted write access on the index, and for 
mutable indexes they should be given execute access on the data table (so that 
the index metadata can be sent to the server).
Yep, that is already covered under PhoenixAccessController#grantAccessToUsers. 
(This will be an automatic grant, which can be turned off if it is considered a 
security issue.)
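
As a rough illustration only (not the actual PhoenixAccessController code), the 
automatic grant amounts to mirroring each user's data-table permissions onto 
the new index table, along these lines:

{noformat}
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.security.access.UserPermission;
import org.apache.hadoop.hbase.util.Bytes;

// Copy READ/WRITE grants from the data table onto a newly created index table.
void mirrorGrants(Connection conn, TableName dataTable, TableName indexTable)
        throws Throwable {
    List<UserPermission> perms =
        AccessControlClient.getUserPermissions(conn, dataTable.getNameAsString());
    for (UserPermission perm : perms) {
        for (Permission.Action action : perm.getActions()) {
            if (action == Permission.Action.READ || action == Permission.Action.WRITE) {
                AccessControlClient.grant(conn, indexTable,
                    Bytes.toString(perm.getUser()), null, null, action);
            }
        }
    }
}
{noformat}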

I'm done with the fixes (including the issues above mentioned by Josh and Sergey). 
(Busy with the day job, but I will upload v3 soon.)

> Remove the need for users to have access to the Phoenix SYSTEM tables to 
> create tables
> --
>
> Key: PHOENIX-4198
> URL: https://issues.apache.org/jira/browse/PHOENIX-4198
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: namespaces
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4198.patch, PHOENIX-4198_v2.patch
>
>
> Problem statement:
> A user who doesn't have access to a table should also not be able to modify 
> Phoenix metadata. Currently, every user is required to have write permission 
> to the SYSTEM tables, which is a security concern, as they can 
> create/alter/drop/corrupt the metadata of any other table without proper access 
> to the corresponding physical tables.
> [~devaraj] recommended a solution as below.
> 1. A coprocessor endpoint would be implemented and all write accesses to the 
> catalog table would have to necessarily go through that. The 'hbase' user 
> would own that table. Today, there is MetaDataEndpointImpl that's run on the 
> RS where the catalog is hosted, and that could be enhanced to serve the 
> purpose we need.
> 2. The regionserver hosting the catalog table would handle all 
> catalog updates - creating the mutations as needed, that is.
> 3. The coprocessor endpoint could use Ranger to do necessary authorization 
> checks before updating the catalog table. So for example, if a user doesn't 
> have authorization to create a table in a certain namespace, or update the 
> schema, etc., it can reject such requests outright. Only after successful 
> validations, does it perform the operations (physical operations to do with 
> creating the table, and updating the catalog table with the necessary 
> mutations).
> 4. In essence, the code that implements dealing with DDLs, would be hosted in 
> the catalog table endpoint. The client code would be really thin, and it 
> would just invoke the endpoint with the necessary info. The additional thing 
> that needs to be done in the endpoint is the validation of authorization to 
> prevent unauthorized users from making changes to someone else's 
> tables/schemas/etc. For example, one should be able to create a view on a 
> table if he has read access on the base table. That mutation on the catalog 
> table would be permitted. For changing the schema (adding a new column for 
> example), the said user would need write permission on the table... etc etc.
> Thanks [~elserj] for the write-up.
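
A minimal, purely hypothetical sketch of the authorize-then-mutate shape 
described in points 1-3 above (none of these names are real Phoenix classes):

{noformat}
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.client.Mutation;

// Hypothetical endpoint-side gate: authorization is checked server-side, and
// catalog mutations are applied only after it passes, so clients never need
// write access to SYSTEM.CATALOG themselves.
class CatalogEndpointSketch {
    interface Authorizer {                    // e.g. backed by Ranger
        boolean canCreateTable(String user, String namespace) throws IOException;
    }
    interface CatalogWriter {                 // runs as the 'hbase' user
        void apply(List<Mutation> catalogMutations) throws IOException;
    }

    private final Authorizer authorizer;
    private final CatalogWriter writer;

    CatalogEndpointSketch(Authorizer authorizer, CatalogWriter writer) {
        this.authorizer = authorizer;
        this.writer = writer;
    }

    void createTable(String user, String namespace,
                     List<Mutation> catalogMutations) throws IOException {
        if (!authorizer.canCreateTable(user, namespace)) {
            throw new IOException("User " + user + " may not create tables in " + namespace);
        }
        writer.apply(catalogMutations);
    }
}
{noformat}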



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)