[jira] [Assigned] (PHOENIX-5358) Metrics for the GlobalIndexChecker coprocessor

2019-07-08 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5358:
---

Assignee: Priyank Porwal  (was: Swaroopa Kadam)

> Metrics for the GlobalIndexChecker coprocessor
> --
>
> Key: PHOENIX-5358
> URL: https://issues.apache.org/jira/browse/PHOENIX-5358
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1, 5.0.0, 4.15.0, 4.14.2
>Reporter: Kadir OZDEMIR
>Assignee: Priyank Porwal
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
>
> The GlobalIndexChecker coprocessor is responsible for checking, during scans, 
> whether an index row is "verified", that is, whether it has completed its most 
> recent two-phase write operation. If the row is not verified, the coprocessor 
> rebuilds it using a read-repair technique. Read-repair operations should be 
> rare, but they add extra latency to scans. We need to know how many read-repair 
> operations happen and how long they take; thus, we need metrics on them.
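The counters such metrics require can be sketched as follows; the class and method names below are illustrative assumptions, not Phoenix's actual metrics API:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of read-repair metrics: how many repairs happened and how
// long they took in total. Names are illustrative, not Phoenix's real API.
public class ReadRepairMetricsDemo {
    static class ReadRepairMetrics {
        private final AtomicLong repairCount = new AtomicLong();
        private final AtomicLong repairTimeMs = new AtomicLong();

        // Called once per read-repair with the elapsed repair time.
        void recordRepair(long elapsedMs) {
            repairCount.incrementAndGet();
            repairTimeMs.addAndGet(elapsedMs);
        }

        long repairCount() { return repairCount.get(); }
        long repairTimeMs() { return repairTimeMs.get(); }
    }

    public static void main(String[] args) {
        ReadRepairMetrics metrics = new ReadRepairMetrics();
        metrics.recordRepair(12);  // e.g. first repair took 12 ms
        metrics.recordRepair(8);   // second took 8 ms
        System.out.println(metrics.repairCount() + " repairs, "
                + metrics.repairTimeMs() + " ms total");
    }
}
```

AtomicLong counters keep the recording path cheap on the hot scan path, which matters because the repairs themselves are already adding latency.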



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5385) View index physical table is treated as PTableType.TABLE; does not load IndexCheckerCoproc

2019-07-05 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5385:

Fix Version/s: 4.14.3
   5.1.0
   4.15.0

> View index physical table is treated as PTableType.TABLE; does not load 
> IndexCheckerCoproc
> --
>
> Key: PHOENIX-5385
> URL: https://issues.apache.org/jira/browse/PHOENIX-5385
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
> Attachments: PHOENIX-5385.4.x-hbase-1.3.v1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As found during development of the IndexUpgradeTool, ensureViewIndexesTableCreated 
> considers the view index physical table to be of type TABLE. It should consider it 
> as INDEX so that the new IndexCheckerCoprocessor is loaded on it.
>  
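The intended behavior can be sketched as follows; the enum, method, and return values are hypothetical illustrations, not Phoenix's actual CQSI#addCoprocessors code:

```java
// Illustrative sketch only: why classifying a view index physical table
// as TABLE means the index checker coprocessor is never attached to it.
public class CoprocLoadDemo {
    enum PTableType { TABLE, INDEX }

    // Decide which index coprocessor (if any) to attach for a table type.
    static String coprocessorFor(PTableType type) {
        // A view index physical table must be treated as INDEX here;
        // treating it as TABLE skips the checker entirely.
        return type == PTableType.INDEX ? "GlobalIndexChecker" : "none";
    }

    public static void main(String[] args) {
        System.out.println(coprocessorFor(PTableType.TABLE));  // the buggy path
        System.out.println(coprocessorFor(PTableType.INDEX));  // the intended path
    }
}
```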



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5385) View index physical table is treated as PTableType.TABLE; does not load IndexCheckerCoproc

2019-07-05 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5385:
---

 Summary: View index physical table is treated as PTableType.TABLE; 
does not load IndexCheckerCoproc
 Key: PHOENIX-5385
 URL: https://issues.apache.org/jira/browse/PHOENIX-5385
 Project: Phoenix
  Issue Type: Bug
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


As found during development of the IndexUpgradeTool, ensureViewIndexesTableCreated 
considers the view index physical table to be of type TABLE. It should consider it 
as INDEX so that the new IndexCheckerCoprocessor is loaded on it.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5358) Metrics for the GlobalIndexChecker coprocessor

2019-06-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5358:
---

Assignee: Swaroopa Kadam

> Metrics for the GlobalIndexChecker coprocessor
> --
>
> Key: PHOENIX-5358
> URL: https://issues.apache.org/jira/browse/PHOENIX-5358
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1, 5.0.0, 4.15.0, 4.14.2
>Reporter: Kadir OZDEMIR
>    Assignee: Swaroopa Kadam
>Priority: Major
>
> The GlobalIndexChecker coprocessor is responsible for checking, during scans, 
> whether an index row is "verified", that is, whether it has completed its most 
> recent two-phase write operation. If the row is not verified, the coprocessor 
> rebuilds it using a read-repair technique. Read-repair operations should be 
> rare, but they add extra latency to scans. We need to know how many read-repair 
> operations happen and how long they take; thus, we need metrics on them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5359) Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled

2019-06-21 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5359:

Attachment: (was: PHOENIX-5359.4.x-hbase-1.3.v1.patch)

> Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with 
> flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled 
> --
>
> Key: PHOENIX-5359
> URL: https://issues.apache.org/jira/browse/PHOENIX-5359
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5359.4.x-HBase-1.3.v1.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5359) Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled

2019-06-21 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5359:

Attachment: (was: PHOENIX-5359.master.v1.patch)

> Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with 
> flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled 
> --
>
> Key: PHOENIX-5359
> URL: https://issues.apache.org/jira/browse/PHOENIX-5359
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5359.4.x-HBase-1.3.v1.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-21 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5333:

Attachment: (was: PHOENIX-5333.master.v1.patch)

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5333.master.v1.patch
>
>  Time Spent: 7h 50m
>  Remaining Estimate: 0h
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design in PHOENIX-5156



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-20 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5333:

Attachment: PHOENIX-5333.master.v1.patch

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5333.master.v1.patch
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design in PHOENIX-5156



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-20 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5333:
---

Assignee: Swaroopa Kadam

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Major
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design in PHOENIX-5156



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5359) Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled

2019-06-20 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5359:

Attachment: PHOENIX-5359.4.x-hbase-1.3.v1.patch

> Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with 
> flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled 
> --
>
> Key: PHOENIX-5359
> URL: https://issues.apache.org/jira/browse/PHOENIX-5359
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5359.4.x-hbase-1.3.v1.patch, 
> PHOENIX-5359.master.v1.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5359) Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled

2019-06-20 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5359:

Attachment: PHOENIX-5359.master.v1.patch

> Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with 
> flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled 
> --
>
> Key: PHOENIX-5359
> URL: https://issues.apache.org/jira/browse/PHOENIX-5359
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5359.master.v1.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5359) Remove (Global-Indexing)new coprocessors in CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled

2019-06-19 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5359:
---

 Summary: Remove (Global-Indexing)new coprocessors in 
CQSI#addCoprocessors with flag(INDEX_REGION_OBSERVER_ENABLED_ATTRIB) disabled 
 Key: PHOENIX-5359
 URL: https://issues.apache.org/jira/browse/PHOENIX-5359
 Project: Phoenix
  Issue Type: Bug
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-11 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5333:

Description: A tool to upgrade existing tables/indexes to use 
self-consistent global indexes design in PHOENIX-5156  (was: A tool to upgrade 
existing tables/indexes to use self-consistent global indexes design)

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>Priority: Major
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design in PHOENIX-5156



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-11 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5333:

Summary: A tool to upgrade existing tables/indexes to use self-consistent 
global indexes design  (was: Tool to upgrade existing tables/indexes to use 
self-consistent global indexes design)

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>Priority: Major
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5333) Tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-11 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5333:
---

 Summary: Tool to upgrade existing tables/indexes to use 
self-consistent global indexes design
 Key: PHOENIX-5333
 URL: https://issues.apache.org/jira/browse/PHOENIX-5333
 Project: Phoenix
  Issue Type: Bug
Reporter: Swaroopa Kadam


A tool to upgrade existing tables/indexes to use self-consistent global indexes 
design



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) Tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-11 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5333:

Issue Type: Improvement  (was: Bug)

> Tool to upgrade existing tables/indexes to use self-consistent global indexes 
> design
> 
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>Priority: Major
>
> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: ArrayIndexOutOfBound while upserting in a view with index on a base table with index

2019-06-07 Thread swaroopa kadam
Filed a jira: https://issues.apache.org/jira/browse/PHOENIX-5322

Thank you.

On Wed, Jun 5, 2019 at 10:01 PM swaroopa kadam 
wrote:

> Hi,
>
> I am trying to upsert a row into a view created on a base table that has
> indexes; the view has an index too. I am getting an ArrayIndexOutOfBoundsException
> from the IndexMaintainer class. Without looking at the code -- my hypothesis is
> that the code might be trying to update the base table's indexes for the new
> row as well?
>
> java.lang.ArrayIndexOutOfBoundsException: -1
> at java.util.ArrayList.elementData(ArrayList.java:422)
> at java.util.ArrayList.get(ArrayList.java:435)
> at
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1619)
> at
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:558)
>
> Do we not allow this?
> I am on Phoenix version 4.13.
>
> Thank you for the help.
>
> --
>
>
> Swaroopa Kadam
> [image: https://]about.me/swaroopa_kadam
> <https://about.me/swaroopa_kadam?promo=email_sig&utm_source=product&utm_medium=email_sig&utm_campaign=gmail_api>
>


-- 


Swaroopa Kadam


[jira] [Updated] (PHOENIX-5322) Upsert on a view of an indexed table fails with ArrayIndexOutOfBound Exception

2019-06-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5322:

Summary: Upsert on a view of an indexed table fails with 
ArrayIndexOutOfBound Exception  (was: Upsert on a view with indexed table fails 
with ArrayIndexOutOfBound Exception)

> Upsert on a view of an indexed table fails with ArrayIndexOutOfBound Exception
> --
>
> Key: PHOENIX-5322
> URL: https://issues.apache.org/jira/browse/PHOENIX-5322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>        Reporter: Swaroopa Kadam
>Priority: Major
>
> {code:java}
> // code placeholder
> public void testUpsertOnViewWithIndexedTable() throws SQLException {
>Properties prop = new Properties();
>Connection conn = DriverManager.getConnection(getUrl(), prop);
>conn.setAutoCommit(true);
>conn.createStatement().execute("CREATE TABLE IF NOT EXISTS us_population 
> (\n" +
>  "  state CHAR(2) NOT NULL,\n" +
>  "  city VARCHAR NOT NULL,\n" +
>  "  population BIGINT,\n" +
>  "  CONSTRAINT my_pk PRIMARY KEY (state, city)) 
> COLUMN_ENCODED_BYTES=0");
>PreparedStatement ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('NY','New York',8143197)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','Los 
> Angeles',3844829)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('IL','Chicago',2842518)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('TX','Houston',2016582)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('PA','Philadelphia',1463281)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('AZ','Phoenix',1461575)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('TX','San 
> Antonio',1256509)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
> Diego',1255540)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('TX','Dallas',1213825)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
> Jose',912332)");
>ps.executeUpdate();
>conn.createStatement().execute("CREATE VIEW IF NOT EXISTS 
> us_population_gv" +
>  "(city_area INTEGER, avg_fam_size INTEGER) AS " +
>  "SELECT * FROM us_population WHERE state = 'CA'");
>conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
> us_population_gv_gi ON " +
>  "us_population_gv (city_area) INCLUDE (population)");
>conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
> us_population_gi ON " +
>  "us_population (population)");
>ps = conn.prepareStatement("UPSERT INTO 
> us_population_gv(state,city,population,city_area,avg_fam_size) " +
> "VALUES('CA','Santa Barbara',912332,1300,4)");
>ps.executeUpdate();
> }
> {code}
> Exception: 
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:422)
>   at java.util.ArrayList.get(ArrayList.java:435)
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1631)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:564)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:144)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1499)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:226)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serializeServerMaintainedIndexes(IndexMaintainer.java:203)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:187)
>   at 
> org.apache.phoenix.schema.PTableIm

[jira] [Updated] (PHOENIX-5322) Upsert on a view with indexed table fails with ArrayIndexOutOfBound Exception

2019-06-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5322:

Description: 
{code:java}
// code placeholder
public void testUpsertOnViewWithIndexedTable() throws SQLException {

   Properties prop = new Properties();
   Connection conn = DriverManager.getConnection(getUrl(), prop);
   conn.setAutoCommit(true);
   conn.createStatement().execute("CREATE TABLE IF NOT EXISTS us_population 
(\n" +
 "  state CHAR(2) NOT NULL,\n" +
 "  city VARCHAR NOT NULL,\n" +
 "  population BIGINT,\n" +
 "  CONSTRAINT my_pk PRIMARY KEY (state, city)) 
COLUMN_ENCODED_BYTES=0");

   PreparedStatement ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('NY','New York',8143197)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','Los 
Angeles',3844829)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('IL','Chicago',2842518)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('TX','Houston',2016582)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('PA','Philadelphia',1463281)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('AZ','Phoenix',1461575)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('TX','San 
Antonio',1256509)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
Diego',1255540)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('TX','Dallas',1213825)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
Jose',912332)");
   ps.executeUpdate();

   conn.createStatement().execute("CREATE VIEW IF NOT EXISTS us_population_gv" +
 "(city_area INTEGER, avg_fam_size INTEGER) AS " +
 "SELECT * FROM us_population WHERE state = 'CA'");

   conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
us_population_gv_gi ON " +
 "us_population_gv (city_area) INCLUDE (population)");

   conn.createStatement().execute("CREATE INDEX IF NOT EXISTS us_population_gi 
ON " +
 "us_population (population)");
   ps = conn.prepareStatement("UPSERT INTO 
us_population_gv(state,city,population,city_area,avg_fam_size) " +
"VALUES('CA','Santa Barbara',912332,1300,4)");

   ps.executeUpdate();
}
{code}

Exception: 

java.lang.ArrayIndexOutOfBoundsException: -1

at java.util.ArrayList.elementData(ArrayList.java:422)
at java.util.ArrayList.get(ArrayList.java:435)
at 
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1631)
at 
org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:564)
at 
org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:144)
at 
org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1499)
at 
org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:226)
at 
org.apache.phoenix.index.IndexMaintainer.serializeServerMaintainedIndexes(IndexMaintainer.java:203)
at 
org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:187)
at 
org.apache.phoenix.schema.PTableImpl.getIndexMaintainers(PTableImpl.java:1511)
at org.apache.phoenix.execute.MutationState.send(MutationState.java:963)
at 
org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
at 
org.apache.phoenix.execute.MutationState.commit(MutationState.java:1255)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:673)
at 
org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:669)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:669)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:412)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:392)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:391)
at 
org.apache.phoenix.jdbc.PhoenixStatement.exec

[jira] [Created] (PHOENIX-5322) Upsert on a view with indexed table fails with ArrayIndexOutOfBound Exception

2019-06-07 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5322:
---

 Summary: Upsert on a view with indexed table fails with 
ArrayIndexOutOfBound Exception
 Key: PHOENIX-5322
 URL: https://issues.apache.org/jira/browse/PHOENIX-5322
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.2
Reporter: Swaroopa Kadam


{code:java}
// code placeholder
public void testUpsertOnViewWithIndexedTable() throws SQLException {

   Properties prop = new Properties();
   Connection conn = DriverManager.getConnection(getUrl(), prop);
   conn.setAutoCommit(true);
   conn.createStatement().execute("CREATE TABLE IF NOT EXISTS us_population 
(\n" +
 "  state CHAR(2) NOT NULL,\n" +
 "  city VARCHAR NOT NULL,\n" +
 "  population BIGINT,\n" +
 "  CONSTRAINT my_pk PRIMARY KEY (state, city)) 
COLUMN_ENCODED_BYTES=0");

   PreparedStatement ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('NY','New York',8143197)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','Los 
Angeles',3844829)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('IL','Chicago',2842518)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('TX','Houston',2016582)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('PA','Philadelphia',1463281)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('AZ','Phoenix',1461575)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('TX','San 
Antonio',1256509)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
Diego',1255540)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population 
VALUES('TX','Dallas',1213825)");
   ps.executeUpdate();
   ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
Jose',912332)");
   ps.executeUpdate();

   conn.createStatement().execute("CREATE VIEW IF NOT EXISTS us_population_gv" +
 "(city_area INTEGER, avg_fam_size INTEGER) AS " +
 "SELECT * FROM us_population WHERE state = 'CA'");

   conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
us_population_gv_gi ON " +
 "us_population_gv (city_area) INCLUDE (population)");

   conn.createStatement().execute("CREATE INDEX IF NOT EXISTS us_population_gi 
ON " +
 "us_population (population)");
   ps = conn.prepareStatement("UPSERT INTO 
us_population_gv(state,city,population,city_area,avg_fam_size) " +
"VALUES('CA','Santa Barbara',912332,1300,4)");

   ps.executeUpdate();
}
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


ArrayIndexOutOfBound while upserting in a view with index on a base table with index

2019-06-05 Thread swaroopa kadam
Hi,

I am trying to upsert a row into a view created on a base table that has
indexes; the view has an index too. I am getting an ArrayIndexOutOfBoundsException
from the IndexMaintainer class. Without looking at the code -- my hypothesis is
that the code might be trying to update the base table's indexes for the new
row as well?

java.lang.ArrayIndexOutOfBoundsException: -1
at java.util.ArrayList.elementData(ArrayList.java:422)
at java.util.ArrayList.get(ArrayList.java:435)
at
org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1619)
at org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:558)

Do we not allow this?
I am on Phoenix version 4.13.

Thank you for the help.

-- 


Swaroopa Kadam


[jira] [Assigned] (PHOENIX-2340) Index creation on multi tenant table causes exception if tenant ID column referenced

2019-06-02 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-2340:
---

Assignee: (was: Swaroopa Kadam)

> Index creation on multi tenant table causes exception if tenant ID column 
> referenced
> 
>
> Key: PHOENIX-2340
> URL: https://issues.apache.org/jira/browse/PHOENIX-2340
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> If an index is attempted to be created on a multi-tenant table, an error 
> occurs if the tenant ID column is referenced in the indexed columns. This is 
> because it's already automatically included. However, it should not be an 
> error if the user references it (as long as it's the first indexed column).
> To repro:
> {code}
> CREATE TABLE IF NOT EXISTS T (
> ORGANIZATION_ID CHAR(15) NOT NULL,
> NETWORK_ID CHAR(15) NOT NULL,
> SUBJECT_ID CHAR(15) NOT NULL,
> RUN_ID CHAR(15) NOT NULL,
> SCORE DOUBLE,
> TOPIC_ID CHAR(15) NOT NULL,
> CONSTRAINT PK PRIMARY KEY (
> ORGANIZATION_ID,
> NETWORK_ID,
> SUBJECT_ID,
> RUN_ID,
> TOPIC_ID
> )
> ) MULTI_TENANT=TRUE;
> CREATE INDEX IDX ON T (
> ORGANIZATION_ID,
> NETWORK_ID,
> TOPIC_ID,
> RUN_ID,
> SCORE
> ) INCLUDE (
> SUBJECT_ID
> );
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-675) Support specifying index details at the time of CREATE TABLE query

2019-06-02 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-675:
--

Assignee: (was: Swaroopa Kadam)

> Support specifying index details at the time of CREATE TABLE query
> --
>
> Key: PHOENIX-675
> URL: https://issues.apache.org/jira/browse/PHOENIX-675
> Project: Phoenix
>  Issue Type: Task
>Reporter: chrajeshbabu
>
> We can support specifying index details during table creation as well(which 
> is supported in some databases). This also helps in Hindex integration where 
> we can avoid unnecessary disable and enable of table every time while 
> creating index.
> Ex:
> CREATE TABLE test (
> id INT NOT NULL,
> last_name  CHAR(30) NOT NULL,
> first_name CHAR(30) NOT NULL,
> PRIMARY KEY (id),
> INDEX name (last_name,first_name)
> );



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-05-30 Thread swaroopa kadam
Huge +1!

On Thu, May 30, 2019 at 4:38 PM William Shen 
wrote:

> Hi all,
>
> Currently, the Phoenix site is maintained in and built from SVN
> <https://svn.apache.org/repos/asf/phoenix/site>. Not sure what level of
> work it would require, but does it make sense to move the source from svn
> to git, so contribution to the website can follow the same JIRA/git
> workflow as the rest of the project? It could also make sure changes to
> Phoenix code are checked in with corresponding documentation changes when
> needed.
>
> - Will
>
-- 


Swaroopa Kadam


[jira] [Assigned] (PHOENIX-5220) Create table fails when using the same connection after schema upgrade

2019-05-29 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5220:
---

Assignee: (was: Swaroopa Kadam)

> Create table fails when using the same connection after schema upgrade
> --
>
> Key: PHOENIX-5220
> URL: https://issues.apache.org/jira/browse/PHOENIX-5220
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.1
>Reporter: Jacob Isaac
>Priority: Major
> Attachments: Screen Shot 2019-03-28 at 9.37.23 PM.png
>
>
> Steps:
> 1. Try to upgrade system.catalog from 4.10  to 4.13
> 2. Run Execute Upgrade
> 3. Creating a table will fail with the following exception -
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=SYSTEM.CATALOG.USE_STATS_FOR_PARALLELIZATION
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:475)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:450)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:755)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:741)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:389)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:272)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2665)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1097)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1775)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)



--


Re: [ANNOUNCE] New committer Swaroopa Kadam

2019-05-29 Thread swaroopa kadam
Thank you everyone! Excited for my further contribution in the community. 😊

On Tue, May 28, 2019 at 4:56 PM Vincent Poon  wrote:

> Congrats Swaroopa, looking forward to more patches
>
> On Tue, May 28, 2019 at 4:33 PM Xu Cang 
> wrote:
>
> > Congrats! :)
> >
> > On Tue, May 28, 2019 at 4:18 PM Priyank Porwal 
> > wrote:
> >
> > > Congrats Swaroopa!
> > >
> > > On Tue, May 28, 2019, 3:24 PM Andrew Purtell 
> > wrote:
> > >
> > > > Congratulations Swaroopa!
> > > >
> > > > On Tue, May 28, 2019 at 2:38 PM Geoffrey Jacoby 
> > > > wrote:
> > > >
> > > > > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> > > > Swaroopa
> > > > > Kadam has accepted our invitation to become a Phoenix committer.
> > > Swaroopa
> > > > > has contributed to a number of areas in the project, including the
> > > query
> > > > > server[1] and been an active participant in many code reviews for
> > > others'
> > > > > patches.
> > > > >
> > > > > Congratulations, Swaroopa, and we look forward to many more great
> > > > > contributions from you!
> > > > >
> > > > > Geoffrey Jacoby
> > > > >
> > > > > [1] -
> > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20%3D%20Resolved%20AND%20assignee%20in%20(swaroopa)
> > > > >
> > > >
> > > >
> > > > --
> > > > Best regards,
> > > > Andrew
> > > >
> > > > Words like orphans lost among the crosstalk, meaning torn from
> truth's
> > > > decrepit hands
> > > >- A23, Crosstalk
> > > >
> > >
> >
>
-- 


Swaroopa Kadam


[jira] [Created] (PHOENIX-5294) getIndexes() on a view should return only the indexes on that view

2019-05-22 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5294:
---

 Summary: getIndexes() on a view should return only the indexes on 
that view
 Key: PHOENIX-5294
 URL: https://issues.apache.org/jira/browse/PHOENIX-5294
 Project: Phoenix
  Issue Type: Bug
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Currently, if I have a table (with indexes on it) and a global view (also with an
index on it), getIndexes() on the view's PTable returns the indexes on the base
table as well, with duplicate entries.

Semantically, it makes more sense to return only the indexes on the view.



--


[jira] [Resolved] (PHOENIX-5255) Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver project

2019-05-18 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5255.
-
Resolution: Resolved

Merged in the phoenix-queryserver git repo. 

> Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver 
> project
> --
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This orchestrator will execute PhoenixCanaryTool at every configured interval
> and perform UPSERT/READ operations via the PQS client.
> This is mainly to demo/exemplify the usability of PQS.
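The interval-driven loop described above can be sketched with a plain `ScheduledExecutorService`. This is a generic sketch, not the actual orchestrator code: the `runCanary` body is a placeholder for invoking PhoenixCanaryTool through a PQS thin-client connection, and all names and intervals are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CanaryOrchestrator {
    static final AtomicInteger runs = new AtomicInteger();

    // Placeholder for invoking PhoenixCanaryTool against the PQS thin client.
    static void runCanary() {
        runs.incrementAndGet();
        System.out.println("canary run: UPSERT/READ via PQS");
    }

    // Runs the canary every intervalMillis for roughly totalMillis, then shuts down.
    static void runFor(long intervalMillis, long totalMillis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(CanaryOrchestrator::runCanary,
                0, intervalMillis, TimeUnit.MILLISECONDS);
        try {
            Thread.sleep(totalMillis);
            scheduler.shutdown();
            scheduler.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        // A real orchestrator would run until stopped; this is a short bounded demo.
        runFor(200, 700);
        System.out.println("total runs: " + runs.get());
    }
}
```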



--


[jira] [Created] (PHOENIX-5288) Add ITs to phoenix-queryserver repo for new canary-orchestrator

2019-05-18 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5288:
---

 Summary: Add ITs to phoenix-queryserver repo for new 
canary-orchestrator
 Key: PHOENIX-5288
 URL: https://issues.apache.org/jira/browse/PHOENIX-5288
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam


Use the mini cluster and a running query server to write ITs. In particular,
simulate an environment where more than one host runs the orchestrator, to
properly exercise Curator-related edge cases.



--


[jira] [Updated] (PHOENIX-5283) Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD

2019-05-16 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5283:

Summary: Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD   
(was: Add CASCADE ALL in the SQL Grammar of ALTER TABLE ADD )

> Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD 
> 
>
> Key: PHOENIX-5283
> URL: https://issues.apache.org/jira/browse/PHOENIX-5283
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Swaroopa Kadam
>        Assignee: Swaroopa Kadam
>Priority: Major
>
> Include the following support in the grammar:
> ALTER TABLE ADD CASCADE <(comma separated list of indexes) | ALL > IF NOT 
> EXISTS  



--


[jira] [Created] (PHOENIX-5283) Add CASCADE ALL in the SQL Grammar of ALTER TABLE ADD

2019-05-15 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5283:
---

 Summary: Add CASCADE ALL in the SQL Grammar of ALTER TABLE ADD 
 Key: PHOENIX-5283
 URL: https://issues.apache.org/jira/browse/PHOENIX-5283
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Include the following support in the grammar:

ALTER TABLE ADD CASCADE <(comma separated list of indexes) | ALL > IF NOT 
EXISTS  



--


[jira] [Updated] (PHOENIX-5276) Update Multi-tenancy section of the website to run sqlline by passing tenant id

2019-05-08 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5276:

Labels: newbie  (was: )

> Update Multi-tenancy section of the website to run sqlline by passing tenant 
> id
> ---
>
> Key: PHOENIX-5276
> URL: https://issues.apache.org/jira/browse/PHOENIX-5276
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>Priority: Minor
>  Labels: newbie
>
> Currently, it has instructions for creating a tenant-specific connection from
> a Java application, and the CREATE TABLE example doesn't include 2 or more
> columns in the PK.
>  
> Update the website to include the following sqlline command:
> ./bin/sqlline.py "localhost:2181;TenantId=abc"
> and modify the CREATE TABLE example to include a primary key constraint.
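The Java-application side of the same idea is passing the tenant as a connection property. A minimal sketch, with an assumed JDBC URL and tenant value; the actual `getConnection` call is shown commented out since it requires a running cluster and the Phoenix driver on the classpath:

```java
import java.util.Properties;

public class TenantConnectionExample {
    // Builds connection properties equivalent to ;TenantId=abc on the sqlline URL.
    static Properties tenantProps(String tenantId) {
        Properties props = new Properties();
        props.setProperty("TenantId", tenantId);  // Phoenix's tenant connection property
        return props;
    }

    public static void main(String[] args) {
        String url = "jdbc:phoenix:localhost:2181";
        Properties props = tenantProps("abc");
        // With a running cluster and the Phoenix driver on the classpath:
        // try (Connection conn = DriverManager.getConnection(url, props)) { ... }
        System.out.println(url + " with TenantId=" + props.getProperty("TenantId"));
    }
}
```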



--


[jira] [Created] (PHOENIX-5276) Update Multi-tenancy section of the website to run sqlline by passing tenant id

2019-05-08 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5276:
---

 Summary: Update Multi-tenancy section of the website to run 
sqlline by passing tenant id
 Key: PHOENIX-5276
 URL: https://issues.apache.org/jira/browse/PHOENIX-5276
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam


Currently, it has instructions for creating a tenant-specific connection from a
Java application, and the CREATE TABLE example doesn't include 2 or more columns
in the PK.

Update the website to include the following sqlline command:

./bin/sqlline.py "localhost:2181;TenantId=abc"

and modify the CREATE TABLE example to include a primary key constraint.

 



--


[jira] [Assigned] (PHOENIX-5220) Create table fails when using the same connection after schema upgrade

2019-05-06 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5220:
---

Assignee: Swaroopa Kadam

> Create table fails when using the same connection after schema upgrade
> --
>
> Key: PHOENIX-5220
> URL: https://issues.apache.org/jira/browse/PHOENIX-5220
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.1
>Reporter: Jacob Isaac
>    Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: Screen Shot 2019-03-28 at 9.37.23 PM.png
>
>
> Steps:
> 1. Try to upgrade system.catalog from 4.10  to 4.13
> 2. Run Execute Upgrade
> 3. Creating a table will fail with the following exception -
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=SYSTEM.CATALOG.USE_STATS_FOR_PARALLELIZATION
>   at 
> org.apache.phoenix.schema.PTableImpl.getColumnForColumnName(PTableImpl.java:828)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:475)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:450)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:755)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:741)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:389)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:272)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>   at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2665)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1097)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.access$700(PhoenixStatement.java:208)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:418)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:379)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:377)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1775)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:807)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)



--


[jira] [Assigned] (PHOENIX-5192) "Parameter value unbound" thrown when use PrepareStatement to getParamMetaData

2019-05-03 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5192:
---

Assignee: Swaroopa Kadam

> "Parameter value unbound" thrown when use PrepareStatement to getParamMetaData
> --
>
> Key: PHOENIX-5192
> URL: https://issues.apache.org/jira/browse/PHOENIX-5192
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: shining
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: CreateTable
>
>
> I use QueryServer in Phoenix 5.x. The test code is as follows:
> {code:java}
> Connection connection = 
> DriverManager.getConnection("jdbc:phoenix:thin:http://localhost:8765;serialization=PROTOBUF");
> PreparedStatement ptst = connection.prepareStatement("SELECT A.EVIDENCE_ID, 
> A.APP_ID, A.APP_NAME FROM BIZ_DATA A JOIN EVIDENCE_AUTH_RECORD B ON 
> A.EVIDENCE_ID=B.EVIDENCE_ID WHERE B.APP_ID in (?) AND A.APP_ID in (?)");
> ptst.setString(1, "");
> ptst.setString(2, "1234");
> ptst.executeQuery();
> {code}
> This throws an exception:
> ~
> {color:red}java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 
> (INT05): Parameter value unbound. Parameter at index 1 is unbound
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:700)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:726)
>   at 
> org.apache.calcite.avatica.jdbc.PhoenixJdbcMeta.prepare(PhoenixJdbcMeta.java:54)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:195)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1215)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1186)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
>   at 
> org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.sql.SQLException: ERROR 2004 (INT05): Parameter value 
> unbound. Parameter at index 1 is unbound
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:518)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.getParam(PhoenixParameterMetaData.java:89)
>   at 
> org.apache.phoenix.jdbc.PhoenixParameterMetaData.isSigned(PhoenixParameterMetaData.java:138)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.parameters(JdbcMeta.java:276)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.signature(JdbcMeta.java:288)
>   at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:721)
>   ... 21 more{color}
> ~
> Create table SQL attached.



--


[jira] [Assigned] (PHOENIX-3430) Optimizer not using all columns from secondary index

2019-04-30 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-3430:
---

Assignee: Swaroopa Kadam

> Optimizer not using all columns from secondary index
> 
>
> Key: PHOENIX-3430
> URL: https://issues.apache.org/jira/browse/PHOENIX-3430
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Zhenhua Xu
>    Assignee: Swaroopa Kadam
>Priority: Major
>
> === Setup ===
> DROP TABLE IF EXISTS TEST.TEMP;
> CREATE TABLE TEST.TEMP (
> ORGANIZATION_ID CHAR(15) NOT NULL,
> NETWORK_ID CHAR(15) NOT NULL,
> ENTITY_ID CHAR(15) NOT NULL,
> SCORE DOUBLE
> CONSTRAINT TOP_ENTITY_PK PRIMARY KEY (
> ORGANIZATION_ID,
> NETWORK_ID,
> ENTITY_ID
> )
> ) VERSIONS=1;
> CREATE INDEX IF NOT EXISTS TEMP_INDEX ON TEST.TEMP (ORGANIZATION_ID, 
> NETWORK_ID, SCORE DESC, ENTITY_ID DESC);
> EXPLAIN
> SELECT entity_id, MAX(score) FROM TEST.TEMP
> WHERE organization_id = 'organization_id'
>   AND (network_id = 'network_id' OR network_id='network_id1')
>   AND ((score = 9.0 AND entity_id < 'entity_id') OR score < 9.0)
> GROUP BY entity_id
> ORDER BY MAX(score) DESC, entity_id DESC
> LIMIT 100;
> === Execution Plan ===
> -CLIENT 1-CHUNK PARALLEL 1-WAY SKIP SCAN ON 2 KEYS OVER TEST.TEMP_INDEX 
> ['organization_id','network_id '] - ['organization_id','network_id1']
> --SERVER FILTER BY FIRST KEY ONLY AND ((TO_DOUBLE("SCORE") = 9.0 AND 
> "ENTITY_ID" < 'entity_id') OR TO_DOUBLE("SCORE") < 9.0)
> --SERVER AGGREGATE INTO DISTINCT ROWS BY ["ENTITY_ID"]
> -CLIENT MERGE SORT
> -CLIENT TOP 100 ROWS SORTED BY [MAX(TO_DOUBLE("SCORE")) DESC, "ENTITY_ID" 
> DESC]
> The execution plan shows a server-side skip scan using only the first 2
> columns of the TEMP_INDEX secondary index. It could also have used the SCORE
> and ENTITY_ID columns to speed up server-side scans.



--


[jira] [Assigned] (PHOENIX-1955) Phoenix create table with salt_bucket,then csvload data,index table is null

2019-04-30 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-1955:
---

Assignee: Swaroopa Kadam

> Phoenix create table with salt_bucket,then csvload data,index table is null
> ---
>
> Key: PHOENIX-1955
> URL: https://issues.apache.org/jira/browse/PHOENIX-1955
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.0.0
> Environment: redhat6.4,hbase1.0.0,hadoop2.5.0
>Reporter: Alisa
>    Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: vierfy
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> The test steps are as follows:
> 1. Create a table with salt buckets, then create a local index on the table.
> 2. Use the Phoenix CSV bulk loader to load data into the table and the index
> table.
> 3. After the load completes, SELECT COUNT(1) FROM the index returns nothing,
> but scanning the index table from the HBase shell shows it is not empty.



--


[jira] [Assigned] (PHOENIX-2882) NPE during View creation for table with secondary index

2019-04-30 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-2882:
---

Assignee: Swaroopa Kadam

> NPE during View creation for table with secondary index 
> 
>
> Key: PHOENIX-2882
> URL: https://issues.apache.org/jira/browse/PHOENIX-2882
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>        Assignee: Swaroopa Kadam
>Priority: Major
>
> A simple test case:
> {noformat}
> create table test (id integer primary key, i1 integer, i2 integer);
> create index i1 on test (i1);
> create view v1 as select * from test where i2 <10;
> {noformat}
> the thrown exception:
> {noformat}
> org.apache.phoenix.schema.ColumnNotFoundException: ERROR 504 (42703): 
> Undefined column. columnName=0:I2
>   at org.apache.phoenix.schema.PTableImpl.getColumn(PTableImpl.java:692)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.resolveColumn(FromCompiler.java:444)
>   at 
> org.apache.phoenix.compile.ExpressionCompiler.resolveColumn(ExpressionCompiler.java:366)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.resolveColumn(WhereCompiler.java:181)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:169)
>   at 
> org.apache.phoenix.compile.WhereCompiler$WhereExpressionCompiler.visit(WhereCompiler.java:156)
>   at 
> org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at org.apache.phoenix.parse.CastParseNode.accept(CastParseNode.java:60)
>   at 
> org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
>   at 
> org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:45)
>   at 
> org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:86)
>   at 
> org.apache.phoenix.util.IndexUtil.rewriteViewStatement(IndexUtil.java:494)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addIndexesFromPhysicalTable(MetaDataClient.java:739)
>   at 
> org.apache.phoenix.schema.MetaDataClient.addTableToCache(MetaDataClient.java:3418)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2279)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:866)
>   at 
> org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:183)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
> {noformat}
> The view is created, but any SELECT using the view causes a similar exception.



--


[jira] [Updated] (PHOENIX-5255) Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver project

2019-04-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5255:

Description: 
This orchestrator will execute PhoenixCanaryTool at every configured interval 
and execute UPSERT/READ via PQS client. 

This is mainly to demo/exemplify usability of PQS

  was:This orchestrator will execute PhoenixCanaryTool at every configured 
interval and execute UPSERT/READ via PQS client. 


> Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver 
> project
> --
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This orchestrator will execute PhoenixCanaryTool at every configured interval 
> and execute UPSERT/READ via PQS client. 
> This is mainly to demo/exemplify usability of PQS



--


[jira] [Updated] (PHOENIX-5255) Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver project

2019-04-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5255:

Description: This orchestrator will execute PhoenixCanaryTool at every 
configured interval and execute UPSERT/READ via PQS client.   (was: **This 
orchestrator will execute PhoenixCanaryTool at every configured interval using 
PQS client. )

> Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver 
> project
> --
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This orchestrator will execute PhoenixCanaryTool at every configured interval 
> and execute UPSERT/READ via PQS client. 



--


[jira] [Updated] (PHOENIX-5255) Create Orchestrator for QueryServerCanaryTool in phoenix-queryserver project

2019-04-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5255:

Description: **This orchestrator will execute PhoenixCanaryTool at every 
configured interval using PQS client. 

> Create Orchestrator for QueryServerCanaryTool in phoenix-queryserver project
> 
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Swaroopa Kadam
>        Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> **This orchestrator will execute PhoenixCanaryTool at every configured 
> interval using PQS client. 



--


[jira] [Updated] (PHOENIX-5255) Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver project

2019-04-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5255:

Summary: Create Orchestrator for PQS using PhoenixCanaryTool in 
phoenix-queryserver project  (was: Create Orchestrator for PhoenixCanaryTool in 
phoenix-queryserver project)

> Create Orchestrator for PQS using PhoenixCanaryTool in phoenix-queryserver 
> project
> --
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> **This orchestrator will execute PhoenixCanaryTool at every configured 
> interval using PQS client. 



--


[jira] [Updated] (PHOENIX-5255) Create Orchestrator for PhoenixCanaryTool in phoenix-queryserver project

2019-04-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5255:

Summary: Create Orchestrator for PhoenixCanaryTool in phoenix-queryserver 
project  (was: Create Orchestrator for QueryServerCanaryTool in 
phoenix-queryserver project)

> Create Orchestrator for PhoenixCanaryTool in phoenix-queryserver project
> 
>
> Key: PHOENIX-5255
> URL: https://issues.apache.org/jira/browse/PHOENIX-5255
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Swaroopa Kadam
>        Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> **This orchestrator will execute PhoenixCanaryTool at every configured 
> interval using PQS client. 



--


[jira] [Updated] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5251:

Attachment: PHOENIX-5251.master.v1.patch

> Avoid taking explicit lock by using AtomicReference in 
> PhoenixAccessController class
> 
>
> Key: PHOENIX-5251
> URL: https://issues.apache.org/jira/browse/PHOENIX-5251
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5251.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5251.master.v1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> By [~elserj] on PHOENIX-5070
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?
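The AtomicReference suggestion quoted above can be illustrated with a generic lock-free lazy-initialization pattern. This is a plain-Java sketch of the pattern only, not the actual PhoenixAccessController code; the class name and the `load()` stand-in are illustrative.

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class LockFreeCache {
    // Cached value; null means "not yet initialized".
    private final AtomicReference<List<String>> cache = new AtomicReference<>();

    // Stand-in for the expensive load the coprocessor would otherwise guard with a lock.
    private List<String> load() {
        return Arrays.asList("T1", "T2");
    }

    public List<String> get() {
        List<String> current = cache.get();
        if (current == null) {
            List<String> loaded = load();
            // Publish only if still unset; a losing thread reuses the winner's value.
            if (cache.compareAndSet(null, loaded)) {
                current = loaded;
            } else {
                current = cache.get();
            }
        }
        return current;
    }

    public static void main(String[] args) {
        LockFreeCache c = new LockFreeCache();
        System.out.println(c.get());             // [T1, T2]
        System.out.println(c.get() == c.get());  // true: same published instance
    }
}
```

The trade-off versus an explicit lock is that two threads may both call `load()` concurrently, but only one result is ever published, and readers never block.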





[jira] [Assigned] (PHOENIX-5240) PhoenixResultWritable write wrong column to phoenix

2019-04-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5240:
---

Assignee: (was: Swaroopa Kadam)

> PhoenixResultWritable write wrong column to phoenix
> ---
>
> Key: PHOENIX-5240
> URL: https://issues.apache.org/jira/browse/PHOENIX-5240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: gabry
>Priority: Major
>
> When creating a Hive external table mapped to a Phoenix table, 
> PhoenixResultWritable writes records to Phoenix by the Hive column index, 
> which may not match the column index of the Phoenix table. As a result, 
> PhoenixResultWritable writes to the wrong Phoenix column. How can I solve it?
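One likely fix direction, sketched here purely as an illustration (ColumnAligner is a hypothetical helper, not part of Phoenix's Hive integration), is to resolve each Hive column to its Phoenix position by name rather than trusting that both sides declare columns in the same order:

```java
import java.util.List;

// Illustrative sketch (hypothetical helper): build an index mapping from
// Hive column order to Phoenix column order by matching column NAMES,
// instead of assuming the positional order is identical on both sides.
public class ColumnAligner {
    public static int[] mapByName(List<String> hiveCols, List<String> phoenixCols) {
        int[] mapping = new int[hiveCols.size()];
        for (int i = 0; i < hiveCols.size(); i++) {
            // Phoenix normalizes unquoted identifiers to upper case.
            mapping[i] = phoenixCols.indexOf(hiveCols.get(i).toUpperCase());
            if (mapping[i] < 0) {
                throw new IllegalArgumentException("No Phoenix column for " + hiveCols.get(i));
            }
        }
        return mapping;
    }
}
```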





[jira] [Assigned] (PHOENIX-5087) Inner Join Cursor Query fails with NullPointerException - JoinCompiler.java:187

2019-04-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5087:
---

Assignee: (was: Swaroopa Kadam)

> Inner Join Cursor Query fails with NullPointerException - 
> JoinCompiler.java:187
> ---
>
> Key: PHOENIX-5087
> URL: https://issues.apache.org/jira/browse/PHOENIX-5087
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jack Steenkamp
>Priority: Major
> Attachments: PhoenixInnerJoinCursorTest.java
>
>
> I have come across an inner join query in my application that fails with a 
> NullPointerException when executed as part of a cursor, but executes fine 
> when run without one. 
>   
>  To reproduce this issue, you can run the attached program (assuming you 
> update the JDBC_URL to point to an instance you have running) or you can 
> follow the steps below:
>   
>  Create the Table:
> {code:java}
> CREATE TABLE IF NOT EXISTS MY_STATS
> (
>    ID                   VARCHAR    NOT NULL,
>    ENTRY_NAME                     VARCHAR    ,
>    ENTRY_VALUE           DOUBLE     ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) 
> IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true{code}
> Execute a normal query (this works fine):
> {code:java}
> SELECT * FROM MY_STATS
>    INNER JOIN 
>    (
>     SELECT ENTRY_NAME, MAX(TRANSACTION_TIME) AS TRANSACTION_TIME 
>  FROM MY_STATS 
>      GROUP BY ENTRY_NAME
>    ) sub
>    ON MY_STATS.ENTRY_NAME = sub.ENTRY_NAME AND MY_STATS.TRANSACTION_TIME = 
> sub.TRANSACTION_TIME 
> ORDER BY MY_STATS.TRANSACTION_TIME DESC  {code}
> Now if you execute the same query, but with the cursor declaration at the top 
> - 
> {code:java}
>  DECLARE MyCursor CURSOR FOR {code}
> It produces the following exception:
> {noformat}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.resolveTable(JoinCompiler.java:187)
>   at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:224)
>   at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:181)
>   at 
> org.apache.phoenix.parse.DerivedTableNode.accept(DerivedTableNode.java:49)
>   at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:201)
>   at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:181)
>   at org.apache.phoenix.parse.JoinTableNode.accept(JoinTableNode.java:81)
>   at org.apache.phoenix.compile.JoinCompiler.compile(JoinCompiler.java:138)
>   at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:190)
>   at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeclareCursorStatement.compilePlan(PhoenixStatement.java:950)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeclareCursorStatement.compilePlan(PhoenixStatement.java:941)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
>   at 
> com.jsteenkamp.phoenix.PhoenixInnerJoinCursorTest.testCursorQuery(PhoenixInnerJoinCursorTest.java:68)
>   at 
> com.jsteenkamp.phoenix.PhoenixInnerJoinCursorTest.main(PhoenixInnerJoinCursorTest.java:20){noformat}





[jira] [Assigned] (PHOENIX-5072) Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2019-04-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5072:
---

Assignee: (was: Swaroopa Kadam)

> Cursor Query Loops Eternally with Local Index, Returns Fine Without It
> --
>
> Key: PHOENIX-5072
> URL: https://issues.apache.org/jira/browse/PHOENIX-5072
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jack Steenkamp
>Priority: Major
> Attachments: PhoenixEternalCursorTest.java
>
>
>  
> I have come across a case where a particular cursor query would carry on 
> looping forever when executed while a local index is present. If, however, I 
> execute the same query without a local index on the table, it works as 
> expected.
> You can reproduce this by executing the attached standalone test case. You 
> only need to modify the JDBC_URL constant (by default it tries to connect to 
> localhost) and then compare the outputs between having CREATE_INDEX = true 
> and CREATE_INDEX = false.
> Here is an example of the output: 
> *1) Connect to an environment and create a simple table:*
> {code:java}
> Connecting To : jdbc:phoenix:localhost:63214{code}
> {code:java}
> CREATE TABLE IF NOT EXISTS SOME_NUMBERS
> (
>    ID                             VARCHAR    NOT NULL,
>    NAME                           VARCHAR    ,
>    ANOTHER_VALUE                  VARCHAR    ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true{code}
> *2) Optionally create a local index:*
>  
> If you want to reproduce the failure, create an index:
> {code:java}
> CREATE LOCAL INDEX index_01 ON SOME_NUMBERS(NAME, TRANSACTION_TIME DESC) 
> INCLUDE(ANOTHER_VALUE){code}
> Otherwise, skip this.
> *3) Insert a number of objects and verify their count*
> {code:java}
> System.out.println("\nInserting Some Items");
> DecimalFormat dmf = new DecimalFormat("");
> final String prefix = "ReferenceData.Country/";
> for (int i = 0; i < 5; i++)
> {
>   for (int j = 0; j < 2; j++)
>   {
> PreparedStatement prstmt = conn.prepareStatement("UPSERT INTO 
> SOME_NUMBERS VALUES(?,?,?,?)");
> prstmt.setString(1,UUID.randomUUID().toString());
> prstmt.setString(2,prefix + dmf.format(i));
> prstmt.setString(3,UUID.randomUUID().toString());
> prstmt.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
> prstmt.execute();
> conn.commit();
> prstmt.close();
>   }
> }{code}
> Verify the count afterwards with: 
> {code:java}
> SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS {code}
> *5) Run a Cursor Query*
> Run a cursor using the standard sequence of commands as appropriate:
> {code:java}
> DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM SOME_NUMBERS where 
> NAME like 'ReferenceData.Country/%' ORDER BY TRANSACTION_TIME DESC{code}
> {code:java}
> OPEN MyCursor{code}
> {code:java}
> FETCH NEXT 10 ROWS FROM MyCursor{code}
>  * Without an index it will return the correct number of rows
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> CLOSING THE CURSOR
> Result : 0
> ITEMS returned by count : 10 | Items Returned by Cursor : 10
> ALL GOOD - No Exception{code}
>  * With an index it will return far more than the number of rows (it appears 
> to be erroneously looping forever, hence the test case terminates it).
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> ITEMS returned by count : 10 | Items Returned by Cursor : 40
> Aborting the Cursor, as it is more than the count!
> Exception in thread "main" java.lang.RuntimeException: The cursor returned a 
> different number of rows from the count !! {code}
>  
>  





[jira] [Assigned] (PHOENIX-1) select only gives results for certain combinations of selected columns when performing join

2019-04-24 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-1:


Assignee: Swaroopa Kadam

> select only gives results for certain combinations of selected columns when 
> performing join
> ---
>
> Key: PHOENIX-1
> URL: https://issues.apache.org/jira/browse/PHOENIX-1
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Esteves
>    Assignee: Swaroopa Kadam
>Priority: Major
>
> I'm using the latest version from the master branch ( 
> 995f508b2a80d9158467c708f35afcf6da4f0cce ) and I've been seeing strange 
> behavior.
> When I run the following queries:
> bq. select * FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt, c_birthdate, c_data, addr_id, addr_street1, addr_street2, 
> addr_city, addr_state, addr_zip, addr_co_id, co_id, co_name, co_exchange, 
> co_currency FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> the resulting table is empty. 
> But if I remove some columns, like this:
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt, addr_id FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> the resulting table is not empty anymore, listing all rows correctly. It 
> seems to me that there is some sort of limit on the size (in bytes) that all 
> aggregated values of a row in the result can have.





[jira] [Assigned] (PHOENIX-1) select only gives results for certain combinations of selected columns when performing join

2019-04-24 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-1:


Assignee: (was: Swaroopa Kadam)

> select only gives results for certain combinations of selected columns when 
> performing join
> ---
>
> Key: PHOENIX-1
> URL: https://issues.apache.org/jira/browse/PHOENIX-1
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergio Esteves
>Priority: Major
>
> I'm using the latest version from the master branch ( 
> 995f508b2a80d9158467c708f35afcf6da4f0cce ) and I've been seeing strange 
> behavior.
> When I run the following queries:
> bq. select * FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt, c_birthdate, c_data, addr_id, addr_street1, addr_street2, 
> addr_city, addr_state, addr_zip, addr_co_id, co_id, co_name, co_exchange, 
> co_currency FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> the resulting table is empty. 
> But if I remove some columns, like this:
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> bq. SELECT c_id, c_uname, c_passwd, c_fname, c_lname, c_addr_id, c_phone, 
> c_email, c_since, c_last_login, c_login, c_expiration, c_discount, c_balance, 
> c_ytd_pmt, addr_id FROM customer inner join address ON customer.c_addr_id = 
> address.addr_id  inner join country ON address.addr_co_id = country.co_id;
> the resulting table is not empty anymore, listing all rows correctly. It 
> seems to me that there is some sort of limit on the size (in bytes) that all 
> aggregated values of a row in the result can have.





[jira] [Assigned] (PHOENIX-2340) Index creation on multi tenant table causes exception if tenant ID column referenced

2019-04-24 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-2340:
---

Assignee: Swaroopa Kadam

> Index creation on multi tenant table causes exception if tenant ID column 
> referenced
> 
>
> Key: PHOENIX-2340
> URL: https://issues.apache.org/jira/browse/PHOENIX-2340
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>    Assignee: Swaroopa Kadam
>Priority: Major
>
> If one attempts to create an index on a multi-tenant table, an error 
> occurs if the tenant ID column is referenced in the indexed columns. This is 
> because it's already automatically included. However, it should not be an 
> error if the user references it (as long as it's the first indexed column).
> To repro:
> {code}
> CREATE TABLE IF NOT EXISTS T (
> ORGANIZATION_ID CHAR(15) NOT NULL,
> NETWORK_ID CHAR(15) NOT NULL,
> SUBJECT_ID CHAR(15) NOT NULL,
> RUN_ID CHAR(15) NOT NULL,
> SCORE DOUBLE,
> TOPIC_ID CHAR(15) NOT NULL
> CONSTRAINT PK PRIMARY KEY (
> ORGANIZATION_ID,
> NETWORK_ID,
> SUBJECT_ID,
> RUN_ID,
> TOPIC_ID
> )
> ) MULTI_TENANT=TRUE;
> CREATE INDEX IDX ON T (
> ORGANIZATION_ID,
> NETWORK_ID,
> TOPIC_ID,
> RUN_ID,
> SCORE
> ) INCLUDE (
> SUBJECT_ID
> );
> {code}





[jira] [Assigned] (PHOENIX-5241) Write to table with global index failed if meta of index changed (split, move, etc)

2019-04-23 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5241:
---

Assignee: (was: Swaroopa Kadam)

> Write to table with global index failed if meta of index changed (split, 
> move, etc)
> ---
>
> Key: PHOENIX-5241
> URL: https://issues.apache.org/jira/browse/PHOENIX-5241
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: phoenix-4.14.1-HBase-1.2
>Reporter: cuizhaohua
>Priority: Major
>
> HBase version :1.2.6
> phoenix version: phoenix-4.14.1-HBase-1.2  (download from 
> [http://phoenix.apache.org/download.html])
> phoenix client version:  phoenix-4.14.1-HBase-1.2   (download from 
> [http://phoenix.apache.org/download.html])
> step 1:
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('1', 'foo');
>  1 row affected (0.298 seconds)
>  
> step 2: move a region of the index of table test_meta_change 
> hbase(main):008:0> move '0b158edd48c60560c358a3208fee8e24'
>  0 row(s) in 0.0500 seconds
>  
> step 3: get the error
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('2', 'foo');
>  19/04/15 15:12:29 WARN client.AsyncProcess: #1, table=TEST_META_CHANGE, 
> attempt=1/35 failed=1ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed. disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [TEST_META_CHANGE_IDX] ,serverTimestamp=1555312349291,
>  at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:172)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
>  at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
>  at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
>  at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3324)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2823)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:758)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:720)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2168)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed. disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [TEST_META_CHANGE_IDX]
>  at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>  at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:171)
>  ... 22 more
>  Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException: 
&g

[jira] [Created] (PHOENIX-5256) Remove queryserver related scripts from bin as the former has its own repo

2019-04-22 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5256:
---

 Summary: Remove queryserver related scripts from bin as the former 
has its own repo
 Key: PHOENIX-5256
 URL: https://issues.apache.org/jira/browse/PHOENIX-5256
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam








[jira] [Created] (PHOENIX-5255) Create Orchestrator for QueryServerCanaryTool in phoenix-queryserver project

2019-04-22 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5255:
---

 Summary: Create Orchestrator for QueryServerCanaryTool in 
phoenix-queryserver project
 Key: PHOENIX-5255
 URL: https://issues.apache.org/jira/browse/PHOENIX-5255
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam








[jira] [Updated] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-22 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5251:

Attachment: (was: PHOENIX-5251.4.x-HBase-1.3.v1.patch)

> Avoid taking explicit lock by using AtomicReference in 
> PhoenixAccessController class
> 
>
> Key: PHOENIX-5251
> URL: https://issues.apache.org/jira/browse/PHOENIX-5251
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> By [~elserj] on PHOENIX-5070
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?





[jira] [Updated] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-22 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5251:

Attachment: (was: PHOENIX-5251.4.x-HBase-1.3.v1.patch)

> Avoid taking explicit lock by using AtomicReference in 
> PhoenixAccessController class
> 
>
> Key: PHOENIX-5251
> URL: https://issues.apache.org/jira/browse/PHOENIX-5251
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> By [~elserj] on PHOENIX-5070
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?





[jira] [Updated] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-22 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5251:

Attachment: PHOENIX-5251.4.x-HBase-1.3.v1.patch

> Avoid taking explicit lock by using AtomicReference in 
> PhoenixAccessController class
> 
>
> Key: PHOENIX-5251
> URL: https://issues.apache.org/jira/browse/PHOENIX-5251
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-5251.4.x-HBase-1.3.v1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> By [~elserj] on PHOENIX-5070
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?





[jira] [Assigned] (PHOENIX-5134) Phoenix Connection Driver #normalize does not distinguish different url with same ZK quorum but different Properties

2019-04-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5134:
---

Assignee: Swaroopa Kadam

> Phoenix Connection Driver #normalize does not distinguish different url with 
> same ZK quorum but different Properties
> 
>
> Key: PHOENIX-5134
> URL: https://issues.apache.org/jira/browse/PHOENIX-5134
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xu Cang
>    Assignee: Swaroopa Kadam
>Priority: Minor
>
> In this code
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDriver.java#L228
> Phoenix now uses a cache to maintain Hconnections. The cache's key is 
> generated by 'normalize' method here:
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixEmbeddedDriver.java#L312
> The normalize method takes ZK quorum, port, rootNode, principle and keytab 
> into account. But not properties passed in in url. 
> E.g.
> Request to create one connection by this url: 
> jdbc:phoenix:localhost:61733;TenantId=1
> Request to create another connection by this url
> jdbc:phoenix:localhost:61733;TenantId=2
> Based on the logic we have, both urls will map to the same Hconnection in 
> the connection cache. 
> This might not be something we really want. 
> For example, different tenants may want different HBase configs (such as 
> HBase timeout settings). With the same Hconnection returned, tenant2's config 
> will be silently ignored. 
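A sketch of the fix direction: extend the cache key to include the url properties, so two urls with the same ZK quorum but different TenantId resolve to different cache entries. ConnKey here is a hypothetical illustration, not Phoenix's actual ConnectionInfo class:

```java
import java.util.Map;
import java.util.Objects;
import java.util.TreeMap;

// Illustrative sketch (hypothetical class): a connection-cache key that
// folds the url properties into equals/hashCode, so urls differing only
// in properties such as TenantId no longer collide in the cache.
public final class ConnKey {
    private final String quorum;
    private final int port;
    private final Map<String, String> props;

    public ConnKey(String quorum, int port, Map<String, String> props) {
        this.quorum = quorum;
        this.port = port;
        this.props = new TreeMap<>(props); // sorted copy for stable equality
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnKey)) return false;
        ConnKey k = (ConnKey) o;
        return port == k.port && quorum.equals(k.quorum) && props.equals(k.props);
    }

    @Override
    public int hashCode() {
        return Objects.hash(quorum, port, props);
    }
}
```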





[jira] [Updated] (PHOENIX-5173) LIKE and ILIKE statements return empty result list for search without wildcard

2019-04-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5173:

Attachment: (was: PHOENIX-5173.4.x-HBase-1.3.v2.patch)

> LIKE and ILIKE statements return empty result list for search without wildcard
> --
>
> Key: PHOENIX-5173
> URL: https://issues.apache.org/jira/browse/PHOENIX-5173
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Emiliia Nesterovych
>    Assignee: Swaroopa Kadam
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5173.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5173.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> I expect these two statements to return the same result, as MySQL does:
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME = 'Some Name';
> {code}
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME LIKE 'Some Name';
> {code}
> But while there is data matching these queries, the statement with the "LIKE" 
> operator returns an empty result set. The same affects the "ILIKE" operator. 
>  Create table SQL is:
> {code:java}
> CREATE SCHEMA IF NOT EXISTS my_schema;
> CREATE TABLE my_schema.user (USER_NAME VARCHAR(255), ID BIGINT NOT NULL 
> PRIMARY KEY);{code}
> Fill up query:
> {code:java}
> UPSERT INTO my_schema.user VALUES('Some Name', 1);{code}





[jira] [Assigned] (PHOENIX-5240) PhoenixResultWritable write wrong column to phoenix

2019-04-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5240:
---

Assignee: Swaroopa Kadam

> PhoenixResultWritable write wrong column to phoenix
> ---
>
> Key: PHOENIX-5240
> URL: https://issues.apache.org/jira/browse/PHOENIX-5240
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: gabry
>    Assignee: Swaroopa Kadam
>Priority: Major
>
> When creating a Hive external table mapped to a Phoenix table, 
> PhoenixResultWritable writes records to Phoenix by the Hive column index, 
> which may not match the column index of the Phoenix table. As a result, 
> PhoenixResultWritable writes to the wrong Phoenix column. How can I solve it?





[jira] [Updated] (PHOENIX-5235) Update SQLline version to the latest

2019-04-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5235:

Attachment: PHEONIX-5235.4.x-HBase-1.3.v1.patch

> Update SQLline version to the latest
> 
>
> Key: PHOENIX-5235
> URL: https://issues.apache.org/jira/browse/PHOENIX-5235
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0
>
> Attachments: PHEONIX-5235.4.x-HBase-1.3.v1.patch, 
> PHEONIX-5235.master.v1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>






[jira] [Created] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-19 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5251:
---

 Summary: Avoid taking explicit lock by using AtomicReference in 
PhoenixAccessController class
 Key: PHOENIX-5251
 URL: https://issues.apache.org/jira/browse/PHOENIX-5251
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam








[jira] [Updated] (PHOENIX-5251) Avoid taking explicit lock by using AtomicReference in PhoenixAccessController class

2019-04-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5251:

Description: 
By [~elserj] on PHOENIX-5070
If we want to avoid taking an explicit lock, what about using AtomicReference 
instead? Can we spin out another Jira issue to fix that?

> Avoid taking explicit lock by using AtomicReference in 
> PhoenixAccessController class
> 
>
> Key: PHOENIX-5251
> URL: https://issues.apache.org/jira/browse/PHOENIX-5251
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
>
> By [~elserj] on PHOENIX-5070
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?





[jira] [Resolved] (PHOENIX-5237) Support UPPER/LOWER functions in SQL statement

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5237.
-
Resolution: Duplicate

This feature is already available. :) 

> Support UPPER/LOWER functions in SQL statement
> --
>
> Key: PHOENIX-5237
> URL: https://issues.apache.org/jira/browse/PHOENIX-5237
> Project: Phoenix
>  Issue Type: Improvement
>    Reporter: Swaroopa Kadam
>        Assignee: Swaroopa Kadam
>Priority: Minor
>






[jira] [Assigned] (PHOENIX-5241) Write to table with global index failed if meta of index changed (split, move, etc)

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5241:
---

Assignee: Swaroopa Kadam

> Write to table with global index failed if meta of index changed (split, 
> move, etc)
> ---
>
> Key: PHOENIX-5241
> URL: https://issues.apache.org/jira/browse/PHOENIX-5241
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: phoenix-4.14.1-HBase-1.2
>Reporter: cuizhaohua
>Assignee: Swaroopa Kadam
>Priority: Major
>
> HBase version :1.2.6
> phoenix version: phoenix-4.14.1-HBase-1.2  (download from 
> [http://phoenix.apache.org/download.html])
> phoenix client version:  phoenix-4.14.1-HBase-1.2   (download from 
> [http://phoenix.apache.org/download.html])
> step 1:
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('1', 'foo');
>  1 row affected (0.298 seconds)
>  
> step 2: move the index region of table test_meta_change
> hbase(main):008:0> move '0b158edd48c60560c358a3208fee8e24'
>  0 row(s) in 0.0500 seconds
>  
> step 3: get the error
> 0: jdbc:phoenix::/hbase> UPSERT INTO test_meta_change VALUES ('2', 'foo');
>  19/04/15 15:12:29 WARN client.AsyncProcess: #1, table=TEST_META_CHANGE, 
> attempt=1/35 failed=1ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed. disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [TEST_META_CHANGE_IDX] ,serverTimestamp=1555312349291,
>  at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:172)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
>  at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
>  at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
>  at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
>  at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3324)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2881)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2823)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:758)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:720)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2168)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>  at java.lang.Thread.run(Thread.java:745)
>  Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed. disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [TEST_META_CHANGE_IDX]
>  at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
>  at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:171)
>  ... 22 more
>  Caused by: 
> org.apache.phoenix.hbase.index.exception.Multi

[jira] [Assigned] (PHOENIX-5246) PhoenixAccessControllers.getAccessControllers() method is not correctly implementing the double-checked locking

2019-04-17 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5246:
---

Assignee: Swaroopa Kadam

> PhoenixAccessControllers.getAccessControllers() method is not correctly 
> implementing the double-checked locking
> ---
>
> Key: PHOENIX-5246
> URL: https://issues.apache.org/jira/browse/PHOENIX-5246
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>
> By [~elserj] on PHOENIX-5070: 
> This looks to me that the getAccessControllers() method is not correctly 
> implementing the double-checked locking "approach" as per 
> https://en.wikipedia.org/wiki/Double-checked_locking#Usage_in_Java (the 
> accessControllers variable must be volatile).
> If we want to avoid taking an explicit lock, what about using AtomicReference 
> instead? Can we spin out another Jira issue to fix that?
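The fix the comment describes can be sketched as below. The names are illustrative, not the actual PhoenixAccessController fields; the point is the `volatile` modifier, which the linked article identifies as mandatory for double-checked locking to be correct on the JVM.

```java
// Minimal sketch of double-checked locking done correctly: the cached field
// must be volatile so that publishing the reference happens-before any
// unsynchronized read of it in another thread.
class AccessControllerCache {
    private static volatile Object accessControllers; // volatile is the key fix

    static Object getAccessControllers() {
        Object result = accessControllers;      // first check, no lock taken
        if (result == null) {
            synchronized (AccessControllerCache.class) {
                result = accessControllers;     // second check, under the lock
                if (result == null) {
                    result = new Object();      // stand-in for the real construction
                    accessControllers = result;
                }
            }
        }
        return result;
    }
}
```

Reading the field into the local `result` also avoids a second volatile read on the fast path.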



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5173) LIKE and ILIKE statements return empty result list for search without wildcard

2019-04-16 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5173:

Attachment: (was: PHOENIX-5173.4.x-HBase-1.3.v1.patch)

> LIKE and ILIKE statements return empty result list for search without wildcard
> --
>
> Key: PHOENIX-5173
> URL: https://issues.apache.org/jira/browse/PHOENIX-5173
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Emiliia Nesterovych
>    Assignee: Swaroopa Kadam
>Priority: Blocker
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I expect these two statements to return the same result, as MySQL does:
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME = 'Some Name';
> {code}
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME LIKE 'Some Name';
> {code}
> But although there is data for these queries, the statement with the "LIKE" 
> operator returns an empty result set. The same affects the "ILIKE" operator. 
>  Create table SQL is:
> {code:java}
> CREATE SCHEMA IF NOT EXISTS my_schema;
> CREATE TABLE my_schema.user (USER_NAME VARCHAR(255), ID BIGINT NOT NULL 
> PRIMARY KEY);{code}
> Fill up query:
> {code:java}
> UPSERT INTO my_schema.user VALUES('Some Name', 1);{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5203) Update contributing guidelines on Phoenix website

2019-04-15 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5203:

Attachment: (was: PHOENIX-5203.master.v4.patch)

> Update contributing guidelines on Phoenix website
> -
>
> Key: PHOENIX-5203
> URL: https://issues.apache.org/jira/browse/PHOENIX-5203
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>        Assignee: Swaroopa Kadam
>Priority: Trivial
> Attachments: PHOENIX-5203.master.v1.patch, 
> PHOENIX-5203.master.v2.patch, PHOENIX-5203.master.v3.patch, 
> PHOENIX-5203.master.v4.patch
>
>
> Add details about the patch file naming convention, the assumption that patch 
> files only work if they contain a single commit (for Hadoop QA), and ask 
> users to raise PRs along with patch files, since that makes it easier 
> for others to review. This will help improve onboarding.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5243) PhoenixResultSet#next() closes the result set if scanner returns null

2019-04-15 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5243:
---

 Summary: PhoenixResultSet#next() closes the result set if scanner 
returns null
 Key: PHOENIX-5243
 URL: https://issues.apache.org/jira/browse/PHOENIX-5243
 Project: Phoenix
  Issue Type: Bug
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam
 Fix For: 4.14.2






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5237) Support UPPER/LOWER functions in SQL statement

2019-04-10 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5237:
---

 Summary: Support UPPER/LOWER functions in SQL statement
 Key: PHOENIX-5237
 URL: https://issues.apache.org/jira/browse/PHOENIX-5237
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5203) Update contributing guidelines on Phoenix website

2019-04-09 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5203:

Attachment: (was: PHOENIX-5203.master.v2.patch)

> Update contributing guidelines on Phoenix website
> -
>
> Key: PHOENIX-5203
> URL: https://issues.apache.org/jira/browse/PHOENIX-5203
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>        Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5203.master.v1.patch, 
> PHOENIX-5203.master.v2.patch
>
>
> Add details about the patch file naming convention, the assumption that patch 
> files only work if they contain a single commit (for Hadoop QA), and ask 
> users to raise PRs along with patch files, since that makes it easier 
> for others to review. This will help improve onboarding.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5235) Update SQLline version to the latest

2019-04-09 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5235:
---

 Summary: Update SQLline version to the latest
 Key: PHOENIX-5235
 URL: https://issues.apache.org/jira/browse/PHOENIX-5235
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.0
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam
 Fix For: 4.15.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5204) Breaking UpsertCompiler#compile function into small readable functions

2019-03-20 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5204:
---

 Summary: Breaking UpsertCompiler#compile function into small 
readable functions
 Key: PHOENIX-5204
 URL: https://issues.apache.org/jira/browse/PHOENIX-5204
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.14.1
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Currently, the UpsertCompiler#compile method is ~500 lines long and does multiple 
things listed below:
 # Upsert Select
 # Upsert Select (Client/Server side)
 # Upsert Values

It would be better and cleaner to break the method into logically smaller and 
more readable methods. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5174) Spin up mini cluster for queryserver canary tool tests

2019-03-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5174.
-
Resolution: Resolved

> Spin up mini cluster for queryserver canary tool tests
> --
>
> Key: PHOENIX-5174
> URL: https://issues.apache.org/jira/browse/PHOENIX-5174
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-660) Upsert never returns when the base table or the index table is disabled

2019-03-16 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-660:
--

Assignee: (was: Swaroopa Kadam)

> Upsert never returns when the base table or the index table is disabled
> ---
>
> Key: PHOENIX-660
> URL: https://issues.apache.org/jira/browse/PHOENIX-660
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-16 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172-v2.patch)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-16 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172-4.x-1.3.patch)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-5174) Spin up mini cluster for queryserver canary tool tests

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reopened PHOENIX-5174:
-

> Spin up mini cluster for queryserver canary tool tests
> --
>
> Key: PHOENIX-5174
> URL: https://issues.apache.org/jira/browse/PHOENIX-5174
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5173) LIKE and ILIKE statements return empty result list for search without wildcard

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5173:
---

Assignee: Swaroopa Kadam

> LIKE and ILIKE statements return empty result list for search without wildcard
> --
>
> Key: PHOENIX-5173
> URL: https://issues.apache.org/jira/browse/PHOENIX-5173
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Emiliia Nesterovych
>    Assignee: Swaroopa Kadam
>Priority: Blocker
>
> I expect these two statements to return the same result, as MySQL does:
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME = 'Some Name';
> {code}
> {code:java}
> SELECT * FROM my_schema.user WHERE USER_NAME LIKE 'Some Name';
> {code}
> But although there is data for these queries, the statement with the "LIKE" 
> operator returns an empty result set. The same affects the "ILIKE" operator. 
>  Create table SQL is:
> {code:java}
>  CREATE TABLE my_schema.user (USER_NAME VARCHAR(255));{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-660) Upsert never returns when the base table or the index table is disabled

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-660:
--

Assignee: Swaroopa Kadam

> Upsert never returns when the base table or the index table is disabled
> ---
>
> Key: PHOENIX-660
> URL: https://issues.apache.org/jira/browse/PHOENIX-660
> Project: Phoenix
>  Issue Type: Task
>Reporter: Samarth Jain
>        Assignee: Swaroopa Kadam
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-675) Support specifying index details at the time of CREATE TABLE query

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-675:
--

Assignee: Swaroopa Kadam

> Support specifying index details at the time of CREATE TABLE query
> --
>
> Key: PHOENIX-675
> URL: https://issues.apache.org/jira/browse/PHOENIX-675
> Project: Phoenix
>  Issue Type: Task
>Reporter: chrajeshbabu
>        Assignee: Swaroopa Kadam
>
> We can support specifying index details during table creation as well (which 
> is supported in some databases). This also helps in Hindex integration, where 
> we can avoid unnecessarily disabling and enabling the table every time an 
> index is created.
> Ex:
> CREATE TABLE test (
> id INT NOT NULL,
> last_name  CHAR(30) NOT NULL,
> first_name CHAR(30) NOT NULL,
> PRIMARY KEY (id),
> INDEX name (last_name,first_name)
> );



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172.patch-v1)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5172-v2.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-07 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172.patch)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5172-v2.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5174) Spin up mini cluster for queryserver canary tool tests

2019-03-06 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam resolved PHOENIX-5174.
-
Resolution: Not A Problem

> Spin up mini cluster for queryserver canary tool tests
> --
>
> Key: PHOENIX-5174
> URL: https://issues.apache.org/jira/browse/PHOENIX-5174
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.14.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-06 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Attachment: (was: PHOENIX-5172-v2.patch)

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5172.patch, PHOENIX-5172.patch-v1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5177) Update PQS documentation for PhoenixCanaryTool

2019-03-05 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5177:
---

 Summary: Update PQS documentation for PhoenixCanaryTool
 Key: PHOENIX-5177
 URL: https://issues.apache.org/jira/browse/PHOENIX-5177
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam


Add details about how to use the Canary Tool. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-03-05 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-5172:

Fix Version/s: 4.15.0

> Harden queryserver canary tool with retries and effective logging
> -
>
> Key: PHOENIX-5172
> URL: https://issues.apache.org/jira/browse/PHOENIX-5172
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.1
>        Reporter: Swaroopa Kadam
>    Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-5172.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # Add retry logic in getting connection url
>  # Remove assigning schema_name to null 
>  # Add more logging



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5174) Spin up mini cluster for queryserver canary tool tests

2019-02-28 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5174:
---

 Summary: Spin up mini cluster for queryserver canary tool tests
 Key: PHOENIX-5174
 URL: https://issues.apache.org/jira/browse/PHOENIX-5174
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.13.1
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam
 Fix For: 4.14.0






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5172) Harden queryserver canary tool with retries and effective logging

2019-02-27 Thread Swaroopa Kadam (JIRA)
Swaroopa Kadam created PHOENIX-5172:
---

 Summary: Harden queryserver canary tool with retries and effective 
logging
 Key: PHOENIX-5172
 URL: https://issues.apache.org/jira/browse/PHOENIX-5172
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.13.1
Reporter: Swaroopa Kadam
Assignee: Swaroopa Kadam
 Fix For: 4.14.2


# Add retry logic in getting connection url
 # Remove assigning schema_name to null 
 # Add more logging
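Item 1 above could look roughly like the following. This is a hedged sketch under assumed names: `getWithRetries`, its parameters, and the supplier are illustrative, not the canary tool's real API.

```java
import java.util.function.Supplier;

// Retry obtaining the connection URL a bounded number of times with a fixed
// backoff, rethrowing the last failure once attempts are exhausted.
class ConnectionUrlRetry {
    static String getWithRetries(Supplier<String> fetch, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return fetch.get();
            } catch (RuntimeException e) {
                last = e; // the tool would log the attempt number here
                try {
                    Thread.sleep(backoffMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw e; // give up promptly if interrupted
                }
            }
        }
        throw last; // maxAttempts >= 1 guarantees last is non-null here
    }
}
```

A fixed backoff keeps the sketch short; an exponential backoff would be the natural extension for a production canary.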



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5087) Inner Join Cursor Query fails with NullPointerException - JoinCompiler.java:187

2019-01-03 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5087:
---

Assignee: Swaroopa Kadam

> Inner Join Cursor Query fails with NullPointerException - 
> JoinCompiler.java:187
> ---
>
> Key: PHOENIX-5087
> URL: https://issues.apache.org/jira/browse/PHOENIX-5087
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jack Steenkamp
>    Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PhoenixInnerJoinCursorTest.java
>
>
> I have come across an inner join query in my application that fails with a 
> NullPointerException if executed as part of a cursor, but executes fine 
> without it. 
>  
> To reproduce this issue, you can run the attached program (assuming you 
> update the JDBC_URL to point to an instance you have running) or you can 
> follow the steps below:
>  
> Create the Table:
>  
> CREATE TABLE IF NOT EXISTS MY_STATS
> ( 
>    ID                   VARCHAR    NOT NULL,
>    ENTRY_NAME                     VARCHAR    ,
>    ENTRY_VALUE           DOUBLE     ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) 
> IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true
>  
> Execute a normal query (this works fine):
>  
> SELECT * FROM MY_STATS
>    INNER JOIN 
>    (
>     SELECT ENTRY_NAME, MAX(TRANSACTION_TIME) AS TRANSACTION_TIME 
>   FROM MY_STATS 
>      GROUP BY ENTRY_NAME
>    ) sub
>    ON MY_STATS.ENTRY_NAME = sub.ENTRY_NAME AND MY_STATS.TRANSACTION_TIME = 
> sub.TRANSACTION_TIME 
> ORDER BY MY_STATS.TRANSACTION_TIME DESC 
>  
> Now if you execute the same query, but with the cursor declaration at the top 
> - 
>  
> DECLARE MyCursor CURSOR FOR 
>  
> It produces the following exception:
>  
> Exception in thread "main" java.lang.NullPointerException
>  at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.resolveTable(JoinCompiler.java:187)
>  at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:224)
>  at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:181)
>  at org.apache.phoenix.parse.DerivedTableNode.accept(DerivedTableNode.java:49)
>  at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:201)
>  at 
> org.apache.phoenix.compile.JoinCompiler$JoinTableConstructor.visit(JoinCompiler.java:181)
>  at org.apache.phoenix.parse.JoinTableNode.accept(JoinTableNode.java:81)
>  at org.apache.phoenix.compile.JoinCompiler.compile(JoinCompiler.java:138)
>  at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:190)
>  at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:490)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeclareCursorStatement.compilePlan(PhoenixStatement.java:950)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeclareCursorStatement.compilePlan(PhoenixStatement.java:941)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:401)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1825)
>  at 
> com.jsteenkamp.phoenix.PhoenixInnerJoinCursorTest.testCursorQuery(PhoenixInnerJoinCursorTest.java:68)
>  at 
> com.jsteenkamp.phoenix.PhoenixInnerJoinCursorTest.main(PhoenixInnerJoinCursorTest.java:20)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5072) Cursor Query Loops Eternally with Local Index, Returns Fine Without It

2018-12-18 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-5072:
---

Assignee: Swaroopa Kadam

> Cursor Query Loops Eternally with Local Index, Returns Fine Without It
> --
>
> Key: PHOENIX-5072
> URL: https://issues.apache.org/jira/browse/PHOENIX-5072
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Jack Steenkamp
>    Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PhoenixEternalCursorTest.java
>
>
>  
> I have come across a case where a particular cursor query would carry on 
> looping forever if executed when a local index is present. If however, I 
> execute the same query without a local index on the table, then it works as 
> expected.
> You can reproduce this by executing the attached standalone test case. You 
> only need to modify the JDBC_URL constant (by default it tries to connect to 
> localhost) and then compare the outputs between having CREATE_INDEX = true 
> or CREATE_INDEX = false.
> Here is an example of the output: 
> *1) Connect to an environment and create a simple table:*
> {code:java}
> Connecting To : jdbc:phoenix:localhost:63214{code}
> {code:java}
> CREATE TABLE IF NOT EXISTS SOME_NUMBERS
> (
>    ID                             VARCHAR    NOT NULL,
>    NAME                           VARCHAR    ,
>    ANOTHER_VALUE                  VARCHAR    ,
>    TRANSACTION_TIME               TIMESTAMP  ,
>    CONSTRAINT pk PRIMARY KEY(ID)
> ) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
> UPDATE_CACHE_FREQUENCY=90,
> COLUMN_ENCODED_BYTES=NONE,
> IMMUTABLE_ROWS=true{code}
> *2) Optionally create a local index:*
>  
> If you want to reproduce the failure, create an index:
> {code:java}
> CREATE LOCAL INDEX index_01 ON SOME_NUMBERS(NAME, TRANSACTION_TIME DESC) 
> INCLUDE(ANOTHER_VALUE){code}
> Otherwise, skip this.
> *3) Insert a number of objects and verify their count*
> {code:java}
> System.out.println("\nInserting Some Items");
> DecimalFormat dmf = new DecimalFormat("");
> final String prefix = "ReferenceData.Country/";
> for (int i = 0; i < 5; i++)
> {
>   for (int j = 0; j < 2; j++)
>   {
> PreparedStatement prstmt = conn.prepareStatement("UPSERT INTO 
> SOME_NUMBERS VALUES(?,?,?,?)");
> prstmt.setString(1,UUID.randomUUID().toString());
> prstmt.setString(2,prefix + dmf.format(i));
> prstmt.setString(3,UUID.randomUUID().toString());
> prstmt.setTimestamp(4, new Timestamp(System.currentTimeMillis()));
> prstmt.execute();
> conn.commit();
> prstmt.close();
>   }
> }{code}
> Verify the count afterwards with: 
> {code:java}
> SELECT COUNT(1) AS TOTAL_ITEMS FROM SOME_NUMBERS {code}
> *5) Run a Cursor Query*
> Run a cursor using the standard sequence of commands as appropriate:
> {code:java}
> DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM SOME_NUMBERS where 
> NAME like 'ReferenceData.Country/%' ORDER BY TRANSACTION_TIME DESC{code}
> {code:java}
> OPEN MyCursor{code}
> {code:java}
> FETCH NEXT 10 ROWS FROM MyCursor{code}
>  * Without an index it will return the correct number of rows
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> CLOSING THE CURSOR
> Result : 0
> ITEMS returned by count : 10 | Items Returned by Cursor : 10
> ALL GOOD - No Exception{code}
> With an index it will return far more than the number of rows (it appears 
> to be erroneously looping forever, hence the test case terminates it).
> {code:java}
> Cursor SQL : DECLARE MyCursor CURSOR FOR SELECT NAME,ANOTHER_VALUE FROM 
> SOME_NUMBERS where NAME like 'ReferenceData.Country/%' ORDER BY 
> TRANSACTION_TIME DESC
> ITEMS returned by count : 10 | Items Returned by Cursor : 40
> Aborting the Cursor, as it is more than the count!
> Exception in thread "main" java.lang.RuntimeException: The cursor returned a 
> different number of rows from the count !! {code}
>  
>  
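The abort in the failing output above can be sketched as a simple guard: keep draining the cursor and fail as soon as it has returned more rows than COUNT(1) reported. This is an illustrative reconstruction, not the reporter's actual harness; the class and method names are invented.

```java
import java.util.Iterator;

public class CursorGuard {
    /**
     * Drains a cursor and throws as soon as it has returned more rows than
     * COUNT(1) reported, mirroring the abort in the report's output
     * ("Aborting the Cursor, as it is more than the count!").
     */
    static int drain(Iterator<?> cursor, int expectedCount) {
        int fetched = 0;
        while (cursor.hasNext()) {
            cursor.next();
            fetched++;
            if (fetched > expectedCount) {
                throw new RuntimeException(
                    "The cursor returned a different number of rows from the count !!");
            }
        }
        return fetched;
    }
}
```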



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4983) Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes or tables with a ROW_TIMESTAMP column

2018-12-14 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4983:

Attachment: PHOENIX-4983-missing-header.patch

> Allow using a connection with a SCN set to write data to tables EXCEPT 
> transactional tables or mutable tables with indexes or tables with a 
> ROW_TIMESTAMP column
> 
>
> Key: PHOENIX-4983
> URL: https://issues.apache.org/jira/browse/PHOENIX-4983
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4983-4.x-HBase-1.4.patch, 
> PHOENIX-4983-4.x-HBase-1.4.patch, PHOENIX-4983-4.x-HBase-1.4.patch, 
> PHOENIX-4983-4.x-HBase-1.4.patch, PHOENIX-4983-missing-header.patch
>
>
> Currently, if an SCN is set on a connection, it is read-only. We only need to 
> prevent a client from using a connection with an SCN set to upsert data for:
> 1) transactional tables 
> 2) mutable tables with indexes 
> 3) tables with a ROW_TIMESTAMP column
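The three exclusions above amount to a simple predicate on table properties. A hypothetical sketch (all names invented; the real check lives in Phoenix's connection and compile path):

```java
public class ScnWritePolicy {
    /**
     * Hypothetical predicate for the rule above: an upsert through a
     * connection with an SCN set is allowed only when none of the three
     * excluded table kinds applies.
     */
    static boolean isScnUpsertAllowed(boolean transactional,
                                      boolean mutableWithIndex,
                                      boolean hasRowTimestampColumn) {
        return !transactional && !mutableWithIndex && !hasRowTimestampColumn;
    }
}
```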





[jira] [Updated] (PHOENIX-4983) Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes or tables with a ROW_TIMESTAMP column

2018-12-14 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4983:

Attachment: (was: PHOENIX-4983-missing-header.patch)

> Allow using a connection with a SCN set to write data to tables EXCEPT 
> transactional tables or mutable tables with indexes or tables with a 
> ROW_TIMESTAMP column
> 
>
> Key: PHOENIX-4983
> URL: https://issues.apache.org/jira/browse/PHOENIX-4983
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4983-4.x-HBase-1.4.patch, 
> PHOENIX-4983-4.x-HBase-1.4.patch, PHOENIX-4983-4.x-HBase-1.4.patch, 
> PHOENIX-4983-4.x-HBase-1.4.patch, PHOENIX-4983-missing-header.patch
>
>
> Currently, if an SCN is set on a connection, it is read-only. We only need to 
> prevent a client from using a connection with an SCN set to upsert data for:
> 1) transactional tables 
> 2) mutable tables with indexes 
> 3) tables with a ROW_TIMESTAMP column





[jira] [Assigned] (PHOENIX-4832) Add Canary Test Tool for Phoenix Query Server

2018-11-30 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-4832:
---

Assignee: Swaroopa Kadam

> Add Canary Test Tool for Phoenix Query Server
> -
>
> Key: PHOENIX-4832
> URL: https://issues.apache.org/jira/browse/PHOENIX-4832
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Ashutosh Parekh
>        Assignee: Swaroopa Kadam
>Priority: Minor
> Attachments: PHOENIX-4832.patch
>
>
> A suggested improvement is to add a Canary Test tool for the Phoenix Query 
> Server. It would execute a set of basic (CRUD) tests against a PQS endpoint 
> and report whether the endpoint is functioning properly. A configurable log 
> sink can publish the results as required.
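The shape described above, a set of basic probes reporting through a pluggable sink, might look like the sketch below. All names are invented; the patch attached to this issue should be consulted for the real design.

```java
import java.util.Map;
import java.util.function.Supplier;

public class CanarySketch {
    /** Pluggable destination for results, the "configurable Log Sink". */
    interface Sink { void report(String test, boolean passed); }

    /** Runs each named probe, reports its outcome, and returns overall health. */
    static boolean run(Map<String, Supplier<Boolean>> probes, Sink sink) {
        boolean healthy = true;
        for (Map.Entry<String, Supplier<Boolean>> e : probes.entrySet()) {
            boolean ok;
            try { ok = e.getValue().get(); } catch (RuntimeException ex) { ok = false; }
            sink.report(e.getKey(), ok);
            healthy &= ok;
        }
        return healthy;
    }
}
```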





[jira] [Updated] (PHOENIX-4983) Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes or tables with a ROW_TIMESTAMP column

2018-11-27 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4983:

Attachment: PHOENIX-4983-4.x-HBase-1.4.patch

> Allow using a connection with a SCN set to write data to tables EXCEPT 
> transactional tables or mutable tables with indexes or tables with a 
> ROW_TIMESTAMP column
> 
>
> Key: PHOENIX-4983
> URL: https://issues.apache.org/jira/browse/PHOENIX-4983
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
> Attachments: PHOENIX-4983-4.x-HBase-1.4.patch
>
>
> Currently, if an SCN is set on a connection, it is read-only. We only need to 
> prevent a client from using a connection with an SCN set to upsert data for:
> 1) transactional tables 
> 2) mutable tables with indexes 
> 3) tables with a ROW_TIMESTAMP column





[jira] [Updated] (PHOENIX-4983) Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes

2018-11-19 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4983:

Description: Currently, if an SCN is set on a connection, it is read-only. We 
only need to prevent a client from setting the timestamp for transactional 
tables or mutable tables with global and local indexes.  (was: Currently If a 
SCN set on a connection it is read-only. We only need to prevent a client from 
setting the timestamp for transactional tables or mutable tables with global 
indexes.)

> Allow using a connection with a SCN set to write data to tables EXCEPT 
> transactional tables or mutable tables with indexes
> --
>
> Key: PHOENIX-4983
> URL: https://issues.apache.org/jira/browse/PHOENIX-4983
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>    Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
>
> Currently, if an SCN is set on a connection, it is read-only. We only need to 
> prevent a client from setting the timestamp for transactional tables or 
> mutable tables with global and local indexes.





[jira] [Updated] (PHOENIX-4983) Allow using a connection with a SCN set to write data to tables EXCEPT transactional tables or mutable tables with indexes

2018-11-08 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4983:

Summary: Allow using a connection with a SCN set to write data to tables 
EXCEPT transactional tables or mutable tables with indexes  (was: Allow using a 
connection with a SCN set to write data to tables EXCEPT transactional tables 
or mutable tables with global indexes)

> Allow using a connection with a SCN set to write data to tables EXCEPT 
> transactional tables or mutable tables with indexes
> --
>
> Key: PHOENIX-4983
> URL: https://issues.apache.org/jira/browse/PHOENIX-4983
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>    Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: SFDC
>
> Currently, if an SCN is set on a connection, it is read-only. We only need to 
> prevent a client from setting the timestamp for transactional tables or 
> mutable tables with global indexes.





[jira] [Updated] (PHOENIX-4872) BulkLoad has bug when loading on single-cell-array-with-offsets table.

2018-10-29 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4872:

Attachment: (was: PHOENIX-4872-master.patch)

> BulkLoad has bug when loading on single-cell-array-with-offsets table.
> --
>
> Key: PHOENIX-4872
> URL: https://issues.apache.org/jira/browse/PHOENIX-4872
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0, 4.13.0, 4.14.0
>Reporter: JeongMin Ju
>    Assignee: Swaroopa Kadam
>Priority: Critical
> Attachments: PHOENIX-4872-4.x-HBase-1.4.patch, 
> PHOENIX-4872-master.patch
>
>
> CsvBulkLoadTool creates incorrect data for SCAWO 
> (SingleCellArrayWithOffsets) tables.
> Every Phoenix table needs an empty marker column, but CsvBulkLoadTool does 
> not create that column for SCAWO tables.
> If you check the data through the HBase shell, you can see that the column 
> is missing.
>  If the row is created by an UPSERT query, the column is created normally:
> {code:java}
> column=0:\x00\x00\x00\x00, timestamp=1535420036372, value=x
> {code}
> Since that marker column is missing, every GROUP BY query returns zero rows: 
> "families":{"0":["\x00\x00\x00\x00"]} is added to the columns of the Scan 
> object, and because CsvBulkLoadTool never created that column, the scan 
> result is empty.
>  
> This problem applies only to tables with multiple column families. Tables 
> with a single column family happen to work, because there 
> "families":{"0":["ALL"]} is added to the columns of the Scan object. 
>  
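The missing-marker condition described above can be checked mechanically: for each column family of a row, look for the empty qualifier (four zero bytes for a column-encoded table, shown in the HBase shell as `0:\x00\x00\x00\x00`). A hedged sketch with invented names, using strings in place of HBase byte arrays:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class EmptyColumnCheck {
    // Empty-column qualifier for a table with 4-byte column encoding.
    static final String EMPTY_QUALIFIER = "\u0000\u0000\u0000\u0000";

    /** Returns the families of a row that lack the empty marker column. */
    static List<String> familiesMissingMarker(Map<String, Set<String>> qualifiersByFamily) {
        List<String> missing = new ArrayList<>();
        for (Map.Entry<String, Set<String>> e : qualifiersByFamily.entrySet()) {
            if (!e.getValue().contains(EMPTY_QUALIFIER)) {
                missing.add(e.getKey());
            }
        }
        return missing;
    }
}
```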





[jira] [Updated] (PHOENIX-4872) BulkLoad has bug when loading on single-cell-array-with-offsets table.

2018-10-29 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4872:

Attachment: (was: PHOENIX-4872-4.x-HBase-1.4.patch)

> BulkLoad has bug when loading on single-cell-array-with-offsets table.
> --
>
> Key: PHOENIX-4872
> URL: https://issues.apache.org/jira/browse/PHOENIX-4872
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0, 4.13.0, 4.14.0
>Reporter: JeongMin Ju
>    Assignee: Swaroopa Kadam
>Priority: Critical
> Attachments: PHOENIX-4872-4.x-HBase-1.4.patch, 
> PHOENIX-4872-master.patch
>
>
> CsvBulkLoadTool creates incorrect data for SCAWO 
> (SingleCellArrayWithOffsets) tables.
> Every Phoenix table needs an empty marker column, but CsvBulkLoadTool does 
> not create that column for SCAWO tables.
> If you check the data through the HBase shell, you can see that the column 
> is missing.
>  If the row is created by an UPSERT query, the column is created normally:
> {code:java}
> column=0:\x00\x00\x00\x00, timestamp=1535420036372, value=x
> {code}
> Since that marker column is missing, every GROUP BY query returns zero rows: 
> "families":{"0":["\x00\x00\x00\x00"]} is added to the columns of the Scan 
> object, and because CsvBulkLoadTool never created that column, the scan 
> result is empty.
>  
> This problem applies only to tables with multiple column families. Tables 
> with a single column family happen to work, because there 
> "families":{"0":["ALL"]} is added to the columns of the Scan object. 
>  





[jira] [Updated] (PHOENIX-300) Support TRUNCATE TABLE

2018-10-29 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-300:
---
Summary: Support TRUNCATE TABLE  (was: Suppot TRUNCATE TABLE)

> Support TRUNCATE TABLE
> --
>
> Key: PHOENIX-300
> URL: https://issues.apache.org/jira/browse/PHOENIX-300
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>
> Though for HBase this might just be a disable, drop, then recreate approach, 
> it would be convenient for users.
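The disable/drop/recreate approach mentioned in the issue can be sketched as a plan of HBase shell commands. This is purely illustrative: the column-family argument to "create" is a placeholder, and a real TRUNCATE would reuse the table's original descriptor (and Phoenix would also have to preserve its own metadata).

```java
import java.util.Arrays;
import java.util.List;

public class TruncatePlan {
    /**
     * The disable/drop/recreate sequence suggested in the issue, written as
     * HBase shell commands. The trailing column-family argument in "create"
     * is a placeholder for the table's real descriptor.
     */
    static List<String> commandsFor(String table, String family) {
        return Arrays.asList(
            "disable '" + table + "'",
            "drop '" + table + "'",
            "create '" + table + "', '" + family + "'");
    }
}
```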





[jira] [Updated] (PHOENIX-4872) BulkLoad has bug when loading on single-cell-array-with-offsets table.

2018-10-26 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4872:

Attachment: PHOENIX-4872-master.patch

> BulkLoad has bug when loading on single-cell-array-with-offsets table.
> --
>
> Key: PHOENIX-4872
> URL: https://issues.apache.org/jira/browse/PHOENIX-4872
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0, 4.13.0, 4.14.0
>Reporter: JeongMin Ju
>    Assignee: Swaroopa Kadam
>Priority: Critical
> Attachments: PHOENIX-4872-4.x-HBase-1.4.patch, 
> PHOENIX-4872-master.patch
>
>
> CsvBulkLoadTool creates incorrect data for SCAWO 
> (SingleCellArrayWithOffsets) tables.
> Every Phoenix table needs an empty marker column, but CsvBulkLoadTool does 
> not create that column for SCAWO tables.
> If you check the data through the HBase shell, you can see that the column 
> is missing.
>  If the row is created by an UPSERT query, the column is created normally:
> {code:java}
> column=0:\x00\x00\x00\x00, timestamp=1535420036372, value=x
> {code}
> Since that marker column is missing, every GROUP BY query returns zero rows: 
> "families":{"0":["\x00\x00\x00\x00"]} is added to the columns of the Scan 
> object, and because CsvBulkLoadTool never created that column, the scan 
> result is empty.
>  
> This problem applies only to tables with multiple column families. Tables 
> with a single column family happen to work, because there 
> "families":{"0":["ALL"]} is added to the columns of the Scan object. 
>  




