[jira] [Updated] (PHOENIX-2582) Prevent need of catch up query when creating non transactional index

2019-05-15 Thread Kadir OZDEMIR (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir OZDEMIR updated PHOENIX-2582:
---
Comment: was deleted

(was: [~giskender] just pointed out this Jira to me. [~tdsilva], 
[~vincentpoon], [~lhofhansl], [~apurtell], as you may know, she is going to 
implement the remaining changes for immutable indexes in the new index design 
(PHOENIX-5156). Immutable indexes are currently implemented on the client side, 
and we were planning to follow that decision for the new design too. Given this 
problem, given that mutable indexes are implemented on the server side, and 
given that immutable indexes do not require any row locking (for read 
consistency and concurrent updates) and therefore do not pose deadlock issues, 
shouldn't we implement them on the server side as well to address the issue in 
this Jira? Implementing them on the server side would be much easier. )

> Prevent need of catch up query when creating non transactional index
> 
>
> Key: PHOENIX-2582
> URL: https://issues.apache.org/jira/browse/PHOENIX-2582
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Priority: Major
>
> If we create an index while we are upserting rows to the table, it's possible 
> we miss writing the corresponding rows to the index table. 
> If a region server is writing a batch of rows and we create an index just 
> before the batch is written, we will miss writing that batch to the index 
> table. This is because we run the initial UPSERT SELECT to populate the index 
> with an SCN that we get from the server, which will be before the timestamp 
> at which the batch of rows is written. 
> We need to figure out if there is a way to determine that all pending batches 
> have been written before running the UPSERT SELECT to do the initial index 
> population.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5283) Add CASCADE ALL in the SQL Grammar of ALTER TABLE ADD

2019-05-15 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5283:
---

 Summary: Add CASCADE ALL in the SQL Grammar of ALTER TABLE ADD 
 Key: PHOENIX-5283
 URL: https://issues.apache.org/jira/browse/PHOENIX-5283
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Include the following support in the grammar:

ALTER TABLE ADD CASCADE <(comma-separated list of indexes) | ALL> IF NOT 
EXISTS





[jira] [Created] (PHOENIX-5282) "--debug" option for Sqlline

2019-05-15 Thread Karthik Palanisamy (JIRA)
Karthik Palanisamy created PHOENIX-5282:
---

 Summary: "--debug" option for Sqlline
 Key: PHOENIX-5282
 URL: https://issues.apache.org/jira/browse/PHOENIX-5282
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.14.0, 4.7.0
Reporter: Karthik Palanisamy


This provides a simple flag "--debug" (or "-d") for enabling client-side debug 
logging in the Sqlline/Sqlline-thin console. No changes to the log4j properties 
are required.
{code:java}
sqlline.py --help
usage: sqlline.py [-h] [-v VERBOSE] [-c COLOR] [-fc FASTCONNECT] [-d]
[zookeepers] [sqlfile]

...

optional arguments:

...

-d, --debug           Enable debug logger.{code}





[jira] [Updated] (PHOENIX-4845) Support using Row Value Constructors in OFFSET clause for paging in tables where the sort order of PK columns varies

2019-05-15 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated PHOENIX-4845:
-
Summary: Support using Row Value Constructors in OFFSET clause for paging 
in tables where the sort order of PK columns varies  (was: Support using Row 
Value Constructors in OFFSET clause to support paging in tables where the sort 
order of PK columns varies)

> Support using Row Value Constructors in OFFSET clause for paging in tables 
> where the sort order of PK columns varies
> 
>
> Key: PHOENIX-4845
> URL: https://issues.apache.org/jira/browse/PHOENIX-4845
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Daniel Wong
>Priority: Major
>  Labels: DESC, SFDC
> Attachments: PHOENIX-offset.txt
>
>
> RVCs along with the LIMIT clause are useful for efficiently paging through 
> rows (see [http://phoenix.apache.org/paged.html]). This works well if the PK 
> columns are all sorted ascending: we can always use the > operator to query 
> for the next batch of rows. 
> However, if the PK of a table is (A DESC, B DESC), we cannot use the 
> following query to page through the data:
> {code:java}
> SELECT * FROM TABLE WHERE (A, B) > (?, ?) ORDER BY A DESC, B DESC LIMIT 20
> {code}
> Since the rows are sorted by A descending and then by B descending, we need 
> to change the comparison operator:
> {code:java}
> SELECT * FROM TABLE WHERE (A, B) < (?, ?) ORDER BY A DESC, B DESC LIMIT 20
> {code}
> If the PK of a table contains columns with mixed sort orders, e.g. (A DESC, 
> B), then we cannot use RVCs to page through the data at all. 
> If we supported RVCs in the OFFSET clause, we could use the offset to set the 
> start row of the scan. Clients would not need logic to determine the 
> comparison operator, and this would also support paging through data for 
> tables whose PK columns are sorted in mixed order. 
> {code:java}
> SELECT * FROM TABLE ORDER BY A DESC, B LIMIT 20 OFFSET (?,?)
> {code}
> We would only allow using the offset if the rows are ordered by the sort 
> order of the PK columns.
>  
> FYI [~jfernando_sfdc]
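The operator flip described above can be sketched as a small client-side helper. This is a hypothetical illustration only; names like rvcOperator and buildPageQuery are invented for the example and are not Phoenix APIs, and it covers only uniformly sorted PKs, which is exactly the limitation the proposed OFFSET syntax would remove.

```java
// Sketch of the client-side burden today: the caller must pick the RVC
// comparison operator based on the PK sort order. With RVCs in OFFSET,
// the server would handle chunk resumption instead.
public class PagingSketch {
    enum SortOrder { ASC, DESC }

    // Choose the RVC comparison operator for uniformly sorted PK columns.
    // Mixed sort orders (e.g. A DESC, B ASC) cannot be expressed this way.
    static String rvcOperator(SortOrder order) {
        return order == SortOrder.ASC ? ">" : "<";
    }

    // Build the paging query for a two-column PK, both sorted the same way.
    static String buildPageQuery(SortOrder order) {
        return "SELECT * FROM TABLE WHERE (A, B) " + rvcOperator(order)
                + " (?, ?) ORDER BY A " + order + ", B " + order + " LIMIT 20";
    }

    public static void main(String[] args) {
        System.out.println(buildPageQuery(SortOrder.ASC));
        System.out.println(buildPageQuery(SortOrder.DESC));
    }
}
```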





[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-15 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated PHOENIX-5231:
-
Attachment: (was: PHOENIX-5231.master.v3.patch)

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Attachments: PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryServices/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources when separate connections 
> query the same underlying table. It would be good to provide a configurable 
> stats cache and to control the cache level, so that it could be per JVM.





[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-15 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated PHOENIX-5231:
-
Attachment: PHOENIX-5231.master.v4.patch

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Attachments: PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryServices/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources when separate connections 
> query the same underlying table. It would be good to provide a configurable 
> stats cache and to control the cache level, so that it could be per JVM.





[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-15 Thread Daniel Wong (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Wong updated PHOENIX-5231:
-
Attachment: PHOENIX-5231.4.x-HBase-1.3.v3.patch
PHOENIX-5231.master.v3.patch

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Attachments: PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryServices/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources when separate connections 
> query the same underlying table. It would be good to provide a configurable 
> stats cache and to control the cache level, so that it could be per JVM.





[jira] [Assigned] (PHOENIX-4296) Dead loop in HBase reverse scan when amount of scan data is greater than SCAN_RESULT_CHUNK_SIZE

2019-05-15 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4296:
---

Assignee: Chen Feng

> Dead loop in HBase reverse scan when amount of scan data is greater than 
> SCAN_RESULT_CHUNK_SIZE
> ---
>
> Key: PHOENIX-4296
> URL: https://issues.apache.org/jira/browse/PHOENIX-4296
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: rukawakang
>Assignee: Chen Feng
>Priority: Major
> Fix For: 4.14.2
>
> Attachments: PHOENIX-4296-4.x-HBase-1.2-v2.patch, 
> PHOENIX-4296-4.x-HBase-1.2-v3.patch, PHOENIX-4296-4.x-HBase-1.2.patch, 
> PHOENIX-4296.patch
>
>
> This problem seems to occur only with reverse scans, not forward scans. When 
> the amount of scanned data is greater than SCAN_RESULT_CHUNK_SIZE (default 
> 2999), ChunkedResultIteratorFactory calls getResultIterator multiple times. 
> But getResultIterator always readjusts startRow, when for a reverse scan it 
> should readjust stopRow instead. For example:
> {code:java}
> if (ScanUtil.isReversed(scan)) {
>     scan.setStopRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> } else {
>     scan.setStartRow(ByteUtil.copyKeyBytesIfNecessary(lastKey));
> }
> {code}
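The boundary logic can be modeled without the HBase dependencies to show why the bug loops. This is a hypothetical, simplified sketch: ChunkBoundarySketch and resumeAfter are invented names standing in for the real Scan/ScanUtil plumbing.

```java
// Simplified model of resuming a chunked scan at lastKey. A forward scan
// reads rows in ascending order, so the next chunk begins at lastKey and
// startRow must move. A reverse scan reads in descending order, so it is
// stopRow that must move. Moving startRow in both cases (the reported bug)
// leaves a reverse scan re-reading the same chunk forever.
public class ChunkBoundarySketch {
    String startRow;
    String stopRow;
    final boolean reversed;

    ChunkBoundarySketch(String startRow, String stopRow, boolean reversed) {
        this.startRow = startRow;
        this.stopRow = stopRow;
        this.reversed = reversed;
    }

    // Adjust the correct boundary so the next chunk makes progress.
    void resumeAfter(String lastKey) {
        if (reversed) {
            stopRow = lastKey;
        } else {
            startRow = lastKey;
        }
    }
}
```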





[jira] [Updated] (PHOENIX-5275) Remove accidental imports from curator-client-2.12.0

2019-05-15 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-5275:

Fix Version/s: (was: 4.14.2)
   5.1.0

> Remove accidental imports from curator-client-2.12.0
> 
>
> Key: PHOENIX-5275
> URL: https://issues.apache.org/jira/browse/PHOENIX-5275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
>
> The following imports 
> import org.apache.curator.shaded.com.google.common.*
> were accidentally introduced in
> phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
> phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java


