[jira] [Created] (PHOENIX-4878) Remove SharedTableState and replace with PTable

2018-08-28 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4878:
---

 Summary: Remove SharedTableState and replace with PTable
 Key: PHOENIX-4878
 URL: https://issues.apache.org/jira/browse/PHOENIX-4878
 Project: Phoenix
  Issue Type: Improvement
Reporter: Thomas D'Silva


When we drop a column from a base table we also drop view indexes that require 
the column. This information is passed back to the client using the 
SharedTableState proto. Convert this to use our regular PTable proto.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4872) BulkLoad has bug when loading on single-cell-array-with-offsets table.

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam reassigned PHOENIX-4872:
---

Assignee: Swaroopa Kadam

> BulkLoad has bug when loading on single-cell-array-with-offsets table.
> --
>
> Key: PHOENIX-4872
> URL: https://issues.apache.org/jira/browse/PHOENIX-4872
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0, 4.12.0, 4.13.0, 4.14.0
>Reporter: JeongMin Ju
>Assignee: Swaroopa Kadam
>Priority: Critical
>
> CsvBulkLoadTool creates incorrect data for SCAWO (SingleCellArrayWithOffsets) tables.
> Every Phoenix table needs a marker (empty) column, but CsvBulkLoadTool does not create
> that column for SCAWO tables. If you check the data through the HBase shell, you can see
> that the column is missing. If the row is written through an UPSERT query instead, the
> marker column is created normally:
> {code:java}
> column=0:\x00\x00\x00\x00, timestamp=1535420036372, value=x
> {code}
> Because this marker column is missing, every GROUP BY query returns zero rows. Phoenix
> adds "families": {"0": ["\x00\x00\x00\x00"]} to the column map of the Scan object, and
> since CsvBulkLoadTool never wrote that column, the scan comes back empty.
> This problem applies only to tables with multiple column families. Tables with a single
> column family happen to work, because in that case "families": {"0": ["ALL"]} is added
> to the column map of the Scan object instead.
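To make the failure mode concrete, here is a minimal sketch using the plain HBase client API (the table name is illustrative; the family "0" and the four-zero-byte qualifier are taken from the shell output above):

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class EmptyColumnScanSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection();
             Table table = conn.getTable(TableName.valueOf("SCAWO_TABLE"))) {
            // Phoenix asks for the empty/marker cell of the default column family,
            // i.e. families = {"0": ["\x00\x00\x00\x00"]} as described above.
            Scan scan = new Scan();
            scan.addColumn(Bytes.toBytes("0"), new byte[] { 0, 0, 0, 0 });
            try (ResultScanner scanner = table.getScanner(scan)) {
                int rows = 0;
                for (Result r : scanner) {
                    rows++;
                }
                // Rows written by CsvBulkLoadTool lack this cell, so nothing comes back
                // and a GROUP BY aggregation over the scan sees zero rows.
                System.out.println("rows with marker cell: " + rows);
            }
        }
    }
}
{code}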



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: (was: PHOENIX-3178-4.x-HBase-1.4.patch)

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3178-4.x-HBase-1.4.patch, PHOENIX-3178.patch
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}
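While the bug is open, a small hedged sketch of verifying the affected-row count independently of executeUpdate() (standard JDBC; conn and tableName as in the test above):

{code:java}
// executeUpdate() may report an incorrect count here when auto-commit is off (per this issue),
// so cross-check against the actual table contents.
try (ResultSet rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM " + tableName)) {
    rs.next();
    long actualRows = rs.getLong(1);
    System.out.println("rows actually in " + tableName + ": " + actualRows);
}
{code}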



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: (was: PHOENIX-3178.patch)

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3178-4.x-HBase-1.4.patch, PHOENIX-3178.patch
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4870) PQS metrics are not being logged when AutoCommit is set to true in LoggingPhoenixConnection

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4870:

Attachment: (was: PHOENIX-4870.patch)

> PQS metrics are not being logged when AutoCommit is set to true in 
> LoggingPhoenixConnection
> ---
>
> Key: PHOENIX-4870
> URL: https://issues.apache.org/jira/browse/PHOENIX-4870
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-4870-4.x-HBase-1.4.patch, PHOENIX-4870.patch
>
>
> In PQS, when LoggingPhoenixConnection calls commit or close, metrics are logged
> properly; however, when AutoCommit is explicitly set to true on the
> LoggingPhoenixConnection, metrics are not logged at all. The bug can be reproduced
> by adding the following test scenario to the PhoenixLoggingMetricsIT class.
> {code:java}
> @Test
> public void testPhoenixMetricsLoggedOnAutoCommit() throws Exception {
>     // Autocommit is turned on explicitly
>     loggedConn.setAutoCommit(true);
>     // with executeUpdate() method:
>     // run SELECT to verify read metrics are logged
>     String query = "SELECT * FROM " + tableName1;
>     verifyQueryLevelMetricsLogging(query);
>     // run UPSERT SELECT to verify mutation metrics are logged
>     String upsertSelect = "UPSERT INTO " + tableName2 + " SELECT * FROM " + tableName1;
>     loggedConn.createStatement().executeUpdate(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     // with execute() method:
>     loggedConn.createStatement().execute(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     clearAllTestMetricMaps();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4870) PQS metrics are not being logged when AutoCommit is set to true in LoggingPhoenixConnection

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4870:

Attachment: (was: PHOENIX-4870-4.x-HBase-1.4.patch)

> PQS metrics are not being logged when AutoCommit is set to true in 
> LoggingPhoenixConnection
> ---
>
> Key: PHOENIX-4870
> URL: https://issues.apache.org/jira/browse/PHOENIX-4870
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-4870-4.x-HBase-1.4.patch, PHOENIX-4870.patch
>
>
> In PQS, when LoggingPhoenixConnection calls commit or close, metrics are logged
> properly; however, when AutoCommit is explicitly set to true on the
> LoggingPhoenixConnection, metrics are not logged at all. The bug can be reproduced
> by adding the following test scenario to the PhoenixLoggingMetricsIT class.
> {code:java}
> @Test
> public void testPhoenixMetricsLoggedOnAutoCommit() throws Exception {
>     // Autocommit is turned on explicitly
>     loggedConn.setAutoCommit(true);
>     // with executeUpdate() method:
>     // run SELECT to verify read metrics are logged
>     String query = "SELECT * FROM " + tableName1;
>     verifyQueryLevelMetricsLogging(query);
>     // run UPSERT SELECT to verify mutation metrics are logged
>     String upsertSelect = "UPSERT INTO " + tableName2 + " SELECT * FROM " + tableName1;
>     loggedConn.createStatement().executeUpdate(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     // with execute() method:
>     loggedConn.createStatement().execute(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     clearAllTestMetricMaps();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4870) PQS metrics are not being logged when AutoCommit is set to true in LoggingPhoenixConnection

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-4870:

Attachment: PHOENIX-4870-4.x-HBase-1.4.patch
PHOENIX-4870.patch

> PQS metrics are not being logged when AutoCommit is set to true in 
> LoggingPhoenixConnection
> ---
>
> Key: PHOENIX-4870
> URL: https://issues.apache.org/jira/browse/PHOENIX-4870
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-4870-4.x-HBase-1.4.patch, PHOENIX-4870.patch
>
>
> In PQS, when LoggingPhoenixConnection calls commit or close, metrics are logged
> properly; however, when AutoCommit is explicitly set to true on the
> LoggingPhoenixConnection, metrics are not logged at all. The bug can be reproduced
> by adding the following test scenario to the PhoenixLoggingMetricsIT class.
> {code:java}
> @Test
> public void testPhoenixMetricsLoggedOnAutoCommit() throws Exception {
>     // Autocommit is turned on explicitly
>     loggedConn.setAutoCommit(true);
>     // with executeUpdate() method:
>     // run SELECT to verify read metrics are logged
>     String query = "SELECT * FROM " + tableName1;
>     verifyQueryLevelMetricsLogging(query);
>     // run UPSERT SELECT to verify mutation metrics are logged
>     String upsertSelect = "UPSERT INTO " + tableName2 + " SELECT * FROM " + tableName1;
>     loggedConn.createStatement().executeUpdate(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     // with execute() method:
>     loggedConn.createStatement().execute(upsertSelect);
>     // Autocommit is on, so mutation metrics are expected during the implicit commit
>     assertTrue("Mutation write metrics are not logged for " + tableName2,
>         mutationWriteMetricsMap.size() > 0);
>     assertTrue("Mutation read metrics not found for " + tableName1,
>         mutationReadMetricsMap.get(tableName1).size() > 0);
>     clearAllTestMetricMaps();
> }
> {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4717) Document restrictions on adding primary key columns

2018-08-28 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4717:

Summary: Document restrictions on adding primary key columns  (was: 
Document when a primary column is allowed to be added)

> Document restrictions on adding primary key columns
> ---
>
> Key: PHOENIX-4717
> URL: https://issues.apache.org/jira/browse/PHOENIX-4717
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Priority: Major
>
> For both views and tables
> 1. We allow nullable columns in the PK, but only if they're variable length. 
> Variable length types may be null, since we use a null-byte terminator (which 
> is a disallowed character in variable length types). Fixed width types do not 
> have a way of representing null.
> There is the following TODO in PColumnImpl.init 
> // TODO: we may be able to allow this for columns at the end of the PK
> 2. We disallow adding a column to the PK 
>a) if the last PK column is VARBINARY 
>b) if the last PK column is fixed width and nullable //not sure if this is 
> possible currently because of #1
>c) if the column is not nullable (in order to handle existing rows)
> For views:
> 1. We disallow adding a column to the PK if the last PK column of the parent is variable
> length. When the last PK column is variable length we read all the bytes of the row key
> without looking for a separator byte, so we cannot add a PK column to a view if the last
> PK column of the parent is variable length (see
> https://issues.apache.org/jira/browse/PHOENIX-978?focusedCommentId=14617847=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-14617847)
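A hedged illustration of the table rules above, assuming Phoenix's ALTER TABLE ... ADD ... PRIMARY KEY syntax (table and column names are made up; the disallowed statements are left commented out rather than asserting exact error codes):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterPkSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Last PK column is fixed width and NOT NULL, and the new column is nullable
            // and variable length, so appending it to the PK should be allowed.
            stmt.execute("CREATE TABLE T_OK (K1 INTEGER NOT NULL PRIMARY KEY, V VARCHAR)");
            stmt.execute("ALTER TABLE T_OK ADD K2 VARCHAR PRIMARY KEY");

            // Rule 2a: the last PK column is VARBINARY, so appending another PK column is rejected.
            stmt.execute("CREATE TABLE T_VB (K1 VARBINARY NOT NULL PRIMARY KEY, V VARCHAR)");
            // stmt.execute("ALTER TABLE T_VB ADD K2 VARCHAR PRIMARY KEY"); // expected to fail

            // Rule 2c: the added PK column must be nullable (existing rows have no value for it).
            // stmt.execute("ALTER TABLE T_OK ADD K3 VARCHAR NOT NULL PRIMARY KEY"); // expected to fail
        }
    }
}
{code}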



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4717) Document when a primary column is allowed to be added

2018-08-28 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-4717:

Description: 
For both views and tables
1. We allow nullable columns in the PK, but only if they're variable length. 
Variable length types may be null, since we use a null-byte terminator (which 
is a disallowed character in variable length types). Fixed width types do not 
have a way of representing null.
There is the following TODO in PColumnImpl.init 
// TODO: we may be able to allow this for columns at the end of the PK

2. We disallow adding a column to the PK 
   a) if the last PK column is VARBINARY 
   b) if the last PK column is fixed width and nullable //not sure if this is 
possible currently because of #1
   c) if the column is not nullable (in order to handle existing rows)

For views:
1. We disallow adding a column to the PK if the last PK column of the parent is variable
length. When the last PK column is variable length we read all the bytes of the row key
without looking for a separator byte, so we cannot add a PK column to a view if the last
PK column of the parent is variable length (see
https://issues.apache.org/jira/browse/PHOENIX-978?focusedCommentId=14617847=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-14617847)






  was:We disallow adding a column to the PK if the last PK column is VARBINARY 
or if the last PK column is fixed width and nullable. 


> Document when a primary column is allowed to be added
> -
>
> Key: PHOENIX-4717
> URL: https://issues.apache.org/jira/browse/PHOENIX-4717
> Project: Phoenix
>  Issue Type: Task
>Reporter: Thomas D'Silva
>Priority: Major
>
> For both views and tables
> 1. We allow nullable columns in the PK, but only if they're variable length. 
> Variable length types may be null, since we use a null-byte terminator (which 
> is a disallowed character in variable length types). Fixed width types do not 
> have a way of representing null.
> There is the following TODO in PColumnImpl.init 
> // TODO: we may be able to allow this for columns at the end of the PK
> 2. We disallow adding a column to the PK 
>a) if the last PK column is VARBINARY 
>b) if the last PK column is fixed width and nullable //not sure if this is 
> possible currently because of #1
>c) if the column is not nullable (in order to handle existing rows)
> For views:
> 1. We disallow adding a column to the PK if the last PK column of the parent is variable
> length. When the last PK column is variable length we read all the bytes of the row key
> without looking for a separator byte, so we cannot add a PK column to a view if the last
> PK column of the parent is variable length (see
> https://issues.apache.org/jira/browse/PHOENIX-978?focusedCommentId=14617847=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-14617847)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4877) Consider Adding Developer Documentation for Phoenix Query Execution

2018-08-28 Thread Daniel Wong (JIRA)
Daniel Wong created PHOENIX-4877:


 Summary: Consider Adding Developer Documentation for Phoenix Query 
Execution
 Key: PHOENIX-4877
 URL: https://issues.apache.org/jira/browse/PHOENIX-4877
 Project: Phoenix
  Issue Type: Improvement
Reporter: Daniel Wong
Assignee: Daniel Wong
 Attachments: Developer's Guide to the Phoenix Query Lifecycle_ SQL to 
CompilableStatement.pdf, phoenix_query_images.tar.gz

As part of my research into Phoenix Query Execution Lifecycle I made a document 
I think the community may want to add to the project.  I've attached a draft 
document and related images.  I'm considering adding a "Developer" section to 
the resources Apache Phoenix page and adding a link to the document.  Also 
looking for feedback on the document.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: PHOENIX-3178-4.x-HBase-1.4.patch
PHOENIX-3178.patch

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3178-4.x-HBase-1.4.patch, PHOENIX-3178.patch
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: (was: PHOENIX-3178.patch)

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: (was: PHOENIX-3178-master.patch)

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: PHOENIX-3178.patch
PHOENIX-3178-master.patch

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3178-master.patch, PHOENIX-3178.patch
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3178) Row count incorrect for UPSERT SELECT when auto commit is false

2018-08-28 Thread Swaroopa Kadam (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swaroopa Kadam updated PHOENIX-3178:

Attachment: (was: PHOENIX-3178.patch)

> Row count incorrect for UPSERT SELECT when auto commit is false
> ---
>
> Key: PHOENIX-3178
> URL: https://issues.apache.org/jira/browse/PHOENIX-3178
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Swaroopa Kadam
>Priority: Major
>  Labels: newbie
> Fix For: 4.15.0
>
> Attachments: PHOENIX-3178-master.patch, PHOENIX-3178.patch
>
>
> To reproduce, use the following test:
> {code:java}
> @Test
> public void testRowCountWithNoAutoCommitOnUpsertSelect() throws Exception {
>     Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
>     props.setProperty(QueryServices.MUTATE_BATCH_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_CACHE_SIZE_ATTRIB, Integer.toString(3));
>     props.setProperty(QueryServices.SCAN_RESULT_CHUNK_SIZE, Integer.toString(3));
>     Connection conn = DriverManager.getConnection(getUrl(), props);
>     conn.setAutoCommit(false);
>     conn.createStatement().execute("CREATE SEQUENCE keys");
>     String tableName = generateRandomString();
>     conn.createStatement().execute(
>         "CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
>     conn.createStatement().execute(
>         "UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
>     conn.commit();
>     for (int i = 0; i < 6; i++) {
>         Statement stmt = conn.createStatement();
>         int upsertCount = stmt.executeUpdate(
>             "UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
>         conn.commit();
>         assertEquals((int) Math.pow(2, i), upsertCount);
>     }
>     conn.close();
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes

2018-08-28 Thread Andrew Purtell
On Tue, Aug 28, 2018 at 2:01 PM James Taylor  wrote:

> Glad to hear this was discussed at HBaseCon. The most common request I've
> seen asked for is to be able to write Phoenix-compatible data from other,
> non-Phoenix services/projects, mainly because row-by-row updates (even when
> batched) can be a bottleneck. This is not feasible by using low level
> constructs because of all the features provided by Phoenix: secondary
> indexes, composite row keys, encoded columns, storage formats, salting,
> ascending/descending row keys, array support, etc. The most feasible way to
> accomplish writes outside of Phoenix is to use UPSERT VALUES followed by
> PhoenixRuntime#getUncommittedDataIterator to get the Cells that would be
> committed (followed by rolling back the uncommitted data). This maintains
> Phoenix's abstraction and minimizes any overhead (the cost of parsing is
> negligible). You can control the frequency of how often the schema is
> pulled over from the server through the UPDATE_CACHE_FREQUENCY declaration.
>
> I haven't seen much demand for bypassing Phoenix JDBC on the read side. If
> you don't want to use Phoenix to query, what's the point in using it?
>

You might have Phoenix clients and HBase clients sharing common data
sources for whatever reason; we cannot assume what constraints or legacy
issues may present themselves in a given Phoenix or HBase user's
environment. Agreed, though, that as a question of prioritization it maybe
doesn't get done until a volunteer does it to scratch a real itch, but at
that point it would be useful to accept the contribution.


> As far as Calcite/Phoenix, it'd be great to see this work picked up. I
> don't think this solves the API problem, though. A good home for this
> adapter would be Apache Drill IMHO. They're up to a new enough version of
> Calcite (and off of their fork) so that this would be feasible and would
> provide immediate benefits on the query side.
>
> Thanks,
> James
>
> On Tue, Aug 28, 2018 at 1:38 PM Andrew Purtell 
> wrote:
>
> > On Mon, Aug 27, 2018 at 11:03 AM Josh Elser  wrote:
> >
> > > 2. Can Phoenix be the de-facto schema for SQL on HBase?
> > >
> > > We've long asserted "if you have to ask how Phoenix serializes data, you
> > > shouldn't be doing it" (a nod that you have to write lots of code). What if
> > > we turn that on its head? Could we extract our PDataType serialization,
> > > composite row-key, column encoding, etc into a minimal API that folks
> > > with their own itches can use?
> > >
> > > With the growing integrations into Phoenix, we could embrace them by
> > > providing an API to make what they're doing easier. In the same vein, we
> > > cement ourselves as a cornerstone of doing it "correctly"
> > >
> >
> > There have been discussions where I work where it seems this would be a
> > great idea. If data types, row key constructors, and other key and data
> > serialization concerns were a public API, these could be used by connectors
> > to Spark or other systems to generate and consume Phoenix compatible data.
> > It improves the integration story all around.
> >
> > Another thought for refactoring I've heard is exposing an API for
> > generating query plans without needing the SQL parser. A public API for
> > programmatically building query plans could be used by connectors to Spark or
> > other systems when pushing down parts of a parallelized or federated query
> > to Phoenix data sources, avoiding unnecessary hacking of SQL language
> > generation, string mangling, or (re)parsing overheads. This kind of
> > describes Calcite's raison d'être. If Phoenix is not embedding Calcite as
> > query planner, as it does not currently, it is independently useful to have
> > a public API for programmatic query plan construction given the current
> > implementation regardless. If Phoenix were to embed Calcite as query
> > planner, you'd probably get a ton of re-use among internal and external
> > users of the Calcite APIs. I'd think whatever option you might choose would
> > be informed by the suitability (or not) of embedding Calcite as Phoenix's
> > query planner, and how soon that might be expected to be feature complete.
> > For what it's worth. Again this extends possibilities for integration.
> >
> >
> > > 3. Better recommendations to users to not attempt certain queries.
> > >
> > > We definitively know that there are certain types of queries that
> > > Phoenix cannot support well (compared to optimal Phoenix use-cases).
> > > Users very commonly fall into such pitfalls on their own and this leaves
> > > a bad taste in their mouth (thinking that the product "stinks").
> > >
> > > Can we do a better job of telling the user when and why it happened?
> > > What would such a user-interaction model look like? Can we supplement
> > > the "why" with instructions of what to do differently (even if in the
> > > abstract)?
> > >
> > > 4. Phoenix-Calcite
> > >
> > > This was mentioned as a "nice to have". From what I understand, 

Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes

2018-08-28 Thread James Taylor
Glad to hear this was discussed at HBaseCon. The most common request I've
seen asked for is to be able to write Phoenix-compatible data from other,
non-Phoenix services/projects, mainly because row-by-row updates (even when
batched) can be a bottleneck. This is not feasible by using low level
constructs because of all the features provided by Phoenix: secondary
indexes, composite row keys, encoded columns, storage formats, salting,
ascending/descending row keys, array support, etc. The most feasible way to
accomplish writes outside of Phoenix is to use UPSERT VALUES followed by
PhoenixRuntime#getUncommittedDataIterator to get the Cells that would be
committed (followed by rolling back the uncommitted data). This maintains
Phoenix's abstraction and minimizes any overhead (the cost of parsing is
negligible). You can control the frequency of how often the schema is
pulled over from the server through the UPDATE_CACHE_FREQUENCY declaration.
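A minimal sketch of that flow, assuming Phoenix 4.x client APIs (the table and column names are illustrative, and the iterator's exact generic types have varied between releases):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.phoenix.util.PhoenixRuntime;

public class UncommittedCellsSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.setAutoCommit(false); // keep the UPSERT uncommitted on the client
            try (PreparedStatement stmt =
                     conn.prepareStatement("UPSERT INTO MY_TABLE (ID, NAME) VALUES (?, ?)")) {
                stmt.setInt(1, 1);
                stmt.setString(2, "a");
                stmt.executeUpdate();
            }
            // Pull out the cells Phoenix would have written, then roll back so nothing is committed.
            Iterator<Pair<byte[], List<KeyValue>>> uncommitted =
                    PhoenixRuntime.getUncommittedDataIterator(conn);
            while (uncommitted.hasNext()) {
                Pair<byte[], List<KeyValue>> tableCells = uncommitted.next();
                // tableCells.getFirst() is the physical table name,
                // tableCells.getSecond() holds the KeyValues to hand to a bulk writer.
            }
            conn.rollback();
        }
    }
}
{code}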

I haven't seen much demand for bypassing Phoenix JDBC on the read side. If
you don't want to use Phoenix to query, what's the point in using it?

As far as Calcite/Phoenix, it'd be great to see this work picked up. I
don't think this solves the API problem, though. A good home for this
adapter would be Apache Drill IMHO. They're up to a new enough version of
Calcite (and off of their fork) so that this would be feasible and would
provide immediate benefits on the query side.

Thanks,
James

On Tue, Aug 28, 2018 at 1:38 PM Andrew Purtell  wrote:

> On Mon, Aug 27, 2018 at 11:03 AM Josh Elser  wrote:
>
> > 2. Can Phoenix be the de-facto schema for SQL on HBase?
> >
> > We've long asserted "if you have to ask how Phoenix serializes data, you
> > shouldn't be doing it" (a nod that you have to write lots of code). What if
> > we turn that on its head? Could we extract our PDataType serialization,
> > composite row-key, column encoding, etc into a minimal API that folks
> > with their own itches can use?
> >
> > With the growing integrations into Phoenix, we could embrace them by
> > providing an API to make what they're doing easier. In the same vein, we
> > cement ourselves as a cornerstone of doing it "correctly"
> >
>
> There have been discussions where I work where it seems this would be a
> great idea. If data types, row key constructors, and other key and data
> serialization concerns were a public API, these could be used by connectors
> to Spark or other systems to generate and consume Phoenix compatible data.
> It improves the integration story all around.
>
> Another thought for refactoring I've heard is exposing an API for
> generating query plans without needing the SQL parser. A public API for
> programmatically building query plans could be used by connectors to Spark or
> other systems when pushing down parts of a parallelized or federated query
> to Phoenix data sources, avoiding unnecessary hacking of SQL language
> generation, string mangling, or (re)parsing overheads. This kind of
> describes Calcite's raison d'être. If Phoenix is not embedding Calcite as
> query planner, as it does not currently, it is independently useful to have
> a public API for programmatic query plan construction given the current
> implementation regardless. If Phoenix were to embed Calcite as query
> planner, you'd probably get a ton of re-use among internal and external
> users of the Calcite APIs. I'd think whatever option you might choose would
> be informed by the suitability (or not) of embedding Calcite as Phoenix's
> query planner, and how soon that might be expected to be feature complete.
> For what it's worth. Again this extends possibilities for integration.
>
>
> > 3. Better recommendations to users to not attempt certain queries.
> >
> > We definitively know that there are certain types of queries that
> > Phoenix cannot support well (compared to optimal Phoenix use-cases).
> > Users very commonly fall into such pitfalls on their own and this leaves
> > a bad taste in their mouth (thinking that the product "stinks").
> >
> > Can we do a better job of telling the user when and why it happened?
> > What would such a user-interaction model look like? Can we supplement
> > the "why" with instructions of what to do differently (even if in the
> > abstract)?
> >
> > 4. Phoenix-Calcite
> >
> > This was mentioned as a "nice to have". From what I understand, there
> > was nothing explicitly wrong with the implementation or approach, just
> > that it was a massive undertaking to continue with little immediate
> > gain. Would this be a boon for us to try to continue in some form? Are
> > there steps we can take that would help push us along the right path?
> >
> > Anyways, I'd love to hear everyone's thoughts. While the concerns were
> > raised at HBaseCon Asia, the suggestions that accompany them here are
> > largely mine ;). Feel free to break them out into their own threads if
> > you think that would be better (or say that you disagree with me --
> > that's cool too)!
> >
> > - Josh
> >
>

[jira] [Resolved] (PHOENIX-2114) Implement PTable.getParentName() correctly for views

2018-08-28 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2114.
-
Resolution: Fixed

> Implement PTable.getParentName() correctly for views
> 
>
> Key: PHOENIX-2114
> URL: https://issues.apache.org/jira/browse/PHOENIX-2114
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Major
>
> Calling getParentName() on a view returns null.
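For reference, a minimal sketch of the call in question (the view name is illustrative; PhoenixRuntime.getTable is assumed here as the way to obtain the PTable from a JDBC connection):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;

import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.util.PhoenixRuntime;

public class ParentNameSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            PTable view = PhoenixRuntime.getTable(conn, "MY_VIEW");
            // The issue reports this returning null for views instead of the parent table name.
            System.out.println(view.getParentName());
        }
    }
}
{code}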



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] EXPLAIN'ing what we do well (was Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes)

2018-08-28 Thread James Taylor
Thomas' idea is a good one. From the EXPLAIN plan ResultSet, you can
directly get an estimate of the number of bytes that will be scanned. Take
a look at the documentation in [1]. We need to implement PHOENIX-4735 too (so
that things are set up well out of the box). We could have a kind of
guardrail config property that defines the maximum number of bytes allowed
to be read and fail any query that goes over this limit. That would cover 80%
of the issues IMHO. Other guardrail config properties could cover other
corner cases.

[1] http://phoenix.apache.org/explainplan.html
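A hedged sketch of reading that estimate from the EXPLAIN ResultSet (the EST_BYTES_READ column name follows the explain-plan documentation linked above; availability depends on the Phoenix version and on stats being collected, and the query and threshold are illustrative):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ExplainEstimateSketch {
    static final long MAX_BYTES_ALLOWED = 1_000_000_000L; // example guardrail threshold

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("EXPLAIN SELECT * FROM MY_TABLE WHERE COL1 = 'x'")) {
            if (rs.next()) {
                long estBytes = rs.getLong("EST_BYTES_READ"); // populated only when stats exist
                if (!rs.wasNull() && estBytes > MAX_BYTES_ALLOWED) {
                    // A client-side guardrail could refuse to run the query here.
                    System.out.println("query would scan ~" + estBytes + " bytes; refusing to run");
                }
            }
        }
    }
}
{code}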

On Mon, Aug 27, 2018 at 3:01 PM Josh Elser  wrote:

> On 8/27/18 5:03 PM, Thomas D'Silva wrote:
> >> 3. Better recommendations to users to not attempt certain queries.
> >>
> >> We definitively know that there are certain types of queries that Phoenix
> >> cannot support well (compared to optimal Phoenix use-cases). Users very
> >> commonly fall into such pitfalls on their own and this leaves a bad taste
> >> in their mouth (thinking that the product "stinks").
> >>
> >> Can we do a better job of telling the user when and why it happened? What
> >> would such a user-interaction model look like? Can we supplement the "why"
> >> with instructions of what to do differently (even if in the abstract)?
> >>
> > Providing relevant feedback before/after a query is run in general is very
> > hard to do. If stats are enabled we have an estimate of how many rows/bytes
> > will be scanned.
> > We could have an optional feature that prevents users from running queries
> > if the rows/bytes scanned are above a certain threshold. We should also
> > enhance our explain plan documentation
> > (http://phoenix.apache.org/explainplan.html) with examples
> > of queries so users know what kinds of queries Phoenix handles well.
>
> Breaking this out..
>
> Totally agree -- this is by no means "easy". I struggle very often
> trying to express just _why_ a query that someone is running in Phoenix
> doesn't run as well as they think it should.
>
> Centralizing on the EXPLAIN plan is good. Making sure it's
> consumable/thorough is probably the lowest hanging fruit. If we can give
> concrete examples to the kinds of explain plans a user might see, I
> think that might get use from users/admins.
>
> Throwing a random idea out there: with stats and the query plan, can we
> give a thumbs-up/thumbs-down? If we can, is that useful?
>


Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes

2018-08-28 Thread Andrew Purtell
On Mon, Aug 27, 2018 at 11:03 AM Josh Elser  wrote:

> 2. Can Phoenix be the de-facto schema for SQL on HBase?
>
> We've long asserted "if you have to ask how Phoenix serializes data, you
> shouldn't be doing it" (a nod that you have to write lots of code). What if
> we turn that on its head? Could we extract our PDataType serialization,
> composite row-key, column encoding, etc into a minimal API that folks
> with their own itches can use?
>
> With the growing integrations into Phoenix, we could embrace them by
> providing an API to make what they're doing easier. In the same vein, we
> cement ourselves as a cornerstone of doing it "correctly"
>

There have been discussions where I work where it seems this would be a
great idea. If data types, row key constructors, and other key and data
serialization concerns were a public API, these could be used by connectors
to Spark or other systems to generate and consume Phoenix compatible data.
It improves the integration story all around.

Another thought for refactoring I've heard is exposing an API for
generating query plans without needing the SQL parser. A public API for
programmatically building query plans could be used by connectors to Spark or
other systems when pushing down parts of a parallelized or federated query
to Phoenix data sources, avoiding unnecessary hacking of SQL language
generation, string mangling, or (re)parsing overheads. This kind of
describes Calcite's raison d'être. If Phoenix is not embedding Calcite as
query planner, as it does not currently, it is independently useful to have
a public API for programmatic query plan construction given the current
implementation regardless. If Phoenix were to embed Calcite as query
planner, you'd probably get a ton of re-use among internal and external
users of the Calcite APIs. I'd think whatever option you might choose would
be informed by the suitability (or not) of embedding Calcite as Phoenix's
query planner, and how soon that might be expected to be feature complete.
For what it's worth. Again this extends possibilities for integration.


> 3. Better recommendations to users to not attempt certain queries.
>
> We definitively know that there are certain types of queries that
> Phoenix cannot support well (compared to optimal Phoenix use-cases).
> Users very commonly fall into such pitfalls on their own and this leaves
> a bad taste in their mouth (thinking that the product "stinks").
>
> Can we do a better job of telling the user when and why it happened?
> What would such a user-interaction model look like? Can we supplement
> the "why" with instructions of what to do differently (even if in the
> abstract)?
>
> 4. Phoenix-Calcite
>
> This was mentioned as a "nice to have". From what I understand, there
> was nothing explicitly wrong with the implementation or approach, just
> that it was a massive undertaking to continue with little immediate
> gain. Would this be a boon for us to try to continue in some form? Are
> there steps we can take that would help push us along the right path?
>
> Anyways, I'd love to hear everyone's thoughts. While the concerns were
> raised at HBaseCon Asia, the suggestions that accompany them here are
> largely mine ;). Feel free to break them out into their own threads if
> you think that would be better (or say that you disagree with me --
> that's cool too)!
>
> - Josh
>


-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


[jira] [Created] (PHOENIX-4876) Delete returns incorrect number of rows affected in some case

2018-08-28 Thread William Shen (JIRA)
William Shen created PHOENIX-4876:
-

 Summary: Delete returns incorrect number of rows affected in some 
case
 Key: PHOENIX-4876
 URL: https://issues.apache.org/jira/browse/PHOENIX-4876
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.0
Reporter: William Shen


We are running Phoenix 4.13 and see the deletion of a non-existent row return
"1 row affected" instead of "No rows affected".

Here is a simplified reproducible case:
{code:java}
> CREATE TABLE IF NOT EXISTS TEST (A BIGINT PRIMARY KEY, B BIGINT);
No rows affected (2.524 seconds)

> DELETE FROM TEST WHERE A = 0;
1 row affected (0.107 seconds)

> DELETE FROM TEST WHERE B = 0;
No rows affected (0.007 seconds)

> DELETE FROM TEST WHERE A = 0 AND B = 0;
No rows affected (0.007 seconds)

> DELETE FROM TEST WHERE A = 0;
1 row affected (0.007 seconds)

> SELECT * FROM TEST;
+----+----+
| A  | B  |
+----+----+
+----+----+
No rows selected (0.023 seconds)

> SELECT COUNT(*) FROM TEST;
+-----------+
| COUNT(1)  |
+-----------+
| 0         |
+-----------+
1 row selected (0.014 seconds){code}
Expected: 
{code:java}
> DELETE FROM TEST WHERE A = 0;
No rows affected{code}
Actual:
{code:java}
> DELETE FROM TEST WHERE A = 0;
1 row affected{code}
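The same check expressed through JDBC, as a small sketch (assumes the TEST table created above and a local Phoenix quorum):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DeleteCountSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(true);
            int affected = stmt.executeUpdate("DELETE FROM TEST WHERE A = 0");
            // Expected 0 when no row with A = 0 exists; per the report, Phoenix 4.13 returns 1 here.
            System.out.println("rows affected: " + affected);
        }
    }
}
{code}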



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4875) Don't acquire a mutex while dropping a table and while creating a view

2018-08-28 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-4875:
---

 Summary: Don't acquire a mutex while dropping a table and while 
creating a view
 Key: PHOENIX-4875
 URL: https://issues.apache.org/jira/browse/PHOENIX-4875
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva


Acquiring this mutex slows down view creation and is not required.
It was done to prevent a base table from being dropped while a view is being created at
the same time. However, even if this happens, the next time the view is resolved
the user will get a TableNotFoundException.





--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4849) UPSERT SELECT fails with stale region boundary exception after a split

2018-08-28 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4849:
---
Attachment: PHOENIX-4849-v3.patch

> UPSERT SELECT fails with stale region boundary exception after a split
> --
>
> Key: PHOENIX-4849
> URL: https://issues.apache.org/jira/browse/PHOENIX-4849
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Akshita Malhotra
>Assignee: Lars Hofhansl
>Priority: Major
> Attachments: PHOENIX-4849-complete-1.4.txt, PHOENIX-4849-fix.txt, 
> PHOENIX-4849-v2.patch, PHOENIX-4849-v3.patch, PHOENIX-4849.patch
>
>
> UPSERT SELECT throws a StaleRegionBoundaryCacheException immediately after a
> split. On the other hand, an upsert followed by a select, for example, works
> absolutely fine.
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
> at 
> org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:365)
>  at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>  at 
> org.apache.phoenix.util.ServerUtil.parseRemoteException(ServerUtil.java:183)
>  at 
> org.apache.phoenix.util.ServerUtil.parseServerExceptionOrNull(ServerUtil.java:167)
>  at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:134)
>  at 
> org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:153)
>  at 
> org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:228)
>  at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
>  at 
> org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
>  at 
> org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
>  at 
> org.apache.phoenix.iterate.SerialIterators$SerialIterator.nextIterator(SerialIterators.java:187)
>  at 
> org.apache.phoenix.iterate.SerialIterators$SerialIterator.currentIterator(SerialIterators.java:160)
>  at 
> org.apache.phoenix.iterate.SerialIterators$SerialIterator.peek(SerialIterators.java:218)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:100)
>  at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>  at 
> org.apache.phoenix.iterate.DelegateResultIterator.next(DelegateResultIterator.java:44)
>  at 
> org.apache.phoenix.iterate.LimitingResultIterator.next(LimitingResultIterator.java:47)
>  at org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:805)
>  at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:219)
>  at 
> org.apache.phoenix.compile.UpsertCompiler$ClientUpsertSelectMutationPlan.execute(UpsertCompiler.java:1292)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>  at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>  at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)
>  at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)
>  at 
> org.apache.phoenix.end2end.UpsertSelectAfterSplitTest.upsertSelectData1(UpsertSelectAfterSplitTest.java:109)
>  at 
> org.apache.phoenix.end2end.UpsertSelectAfterSplitTest.testUpsertSelect(UpsertSelectAfterSplitTest.java:59)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>  at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>  at 

[jira] [Reopened] (PHOENIX-4839) IndexHalfStoreFileReaderGenerator throws NullPointerException

2018-08-28 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reopened PHOENIX-4839:
-

[~mnpoonia]

The following two tests are failing with this patch; can you please take a look?

org.apache.phoenix.end2end.LocalIndexSplitMergeIT.testLocalIndexScanAfterRegionsMerge
org.apache.phoenix.end2end.LocalIndexSplitMergeIT.testLocalIndexScanWithMergeSpecialCase

> IndexHalfStoreFileReaderGenerator throws NullPointerException
> -
>
> Key: PHOENIX-4839
> URL: https://issues.apache.org/jira/browse/PHOENIX-4839
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4839-4.x-HBase-1.3.patch, 
> PHOENIX-4839-HBase-1.3.patch, PHOENIX-4839.patch
>
>
> {noformat}
> 018-08-08 09:15:25,075 FATAL [7,queue=3,port=60020] 
> regionserver.HRegionServer - ABORTING region server 
> phoenix1,60020,1533715370645: The coprocessor 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator threw 
> java.lang.NullPointerException
>  java.lang.NullPointerException
>  at java.util.ArrayList.addAll(ArrayList.java:577)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.getLocalIndexScanners(IndexHalfStoreFileReaderGenerator.java:398)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.access$000(IndexHalfStoreFileReaderGenerator.java:73)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.getScannersNoCompaction(IndexHalfStoreFileReaderGenerator.java:332)
>  at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.(StoreScanner.java:214)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator$1.(IndexHalfStoreFileReaderGenerator.java:327)
>  at 
> org.apache.hadoop.hbase.regionserver.IndexHalfStoreFileReaderGenerator.preStoreScannerOpen(IndexHalfStoreFileReaderGenerator.java:326)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$51.call(RegionCoprocessorHost.java:1335)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1693)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1771)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1734)
>  at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preStoreScannerOpen(RegionCoprocessorHost.java:1330)
>  at org.apache.hadoop.hbase.regionserver.HStore.getScanner(HStore.java:2169)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.initializeScanners(HRegion.java:5916)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.(HRegion.java:5890)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.instantiateRegionScanner(HRegion.java:2739)
>  at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2719)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7197)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7156)
>  at org.apache.hadoop.hbase.regionserver.HRegion.get(HRegion.java:7149)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2249)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:35068)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2373)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168
>  {noformat}
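
The top frame is java.util.ArrayList#addAll, which throws NullPointerException when handed a null collection, so the scanner list built in getLocalIndexScanners is presumably receiving a null source. As a rough illustration only (this is not the attached patch; the type and method names are hypothetical stand-ins), a defensive null check around the addAll calls avoids the region server abort:

{code:java}
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class AddAllNullCheckSketch {

    // Hypothetical stand-in for the scanner type handled by the coprocessor.
    static class KeyValueScanner {}

    // ArrayList.addAll(null) throws NullPointerException, matching the top frame
    // of the stack trace above; guarding the call keeps the merge from aborting.
    static List<KeyValueScanner> mergeScanners(Collection<KeyValueScanner> existing,
                                               Collection<KeyValueScanner> fromHalfStore) {
        List<KeyValueScanner> merged = new ArrayList<>();
        if (existing != null) {
            merged.addAll(existing);
        }
        if (fromHalfStore != null) {
            merged.addAll(fromHalfStore);
        }
        return merged;
    }

    public static void main(String[] args) {
        // The unguarded equivalent, new ArrayList<>().addAll(null), would throw here.
        System.out.println(mergeScanners(null, new ArrayList<>()).size()); // prints 0
    }
}
{code}

Whether the proper fix is a guard like this or ensuring the caller never produces a null list is for the attached patches to settle.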



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4873) Document missing time and timestamp formatting configuration properties

2018-08-28 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-4873.
-
Resolution: Fixed

> Document missing time and timestamp formatting configuration properties
> ---
>
> Key: PHOENIX-4873
> URL: https://issues.apache.org/jira/browse/PHOENIX-4873
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: PHOENIX-4873.diff
>
>
> [https://phoenix.apache.org/tuning.html] lacks entries for 
> phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by 
> psql to parse out TIME and TIMESTAMP data types.
> Add them.
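
Until the tuning page is updated, the keys above behave like the other client-side query services properties, so one plausible way to exercise them outside of psql is to pass them as connection properties. A minimal sketch, assuming a local quorum and illustrative format patterns (neither is taken from the issue):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimeFormatPropsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Property names come from the issue; the patterns below are illustrative only.
        props.setProperty("phoenix.query.timeFormat", "HH:mm:ss");
        props.setProperty("phoenix.query.timestampFormat", "yyyy-MM-dd HH:mm:ss.SSS");
        // Hypothetical connection string; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
            System.out.println("Connected with custom TIME/TIMESTAMP parse formats");
        }
    }
}
{code}

psql itself would presumably pick the same keys up from the client-side configuration (for example an hbase-site.xml on its classpath), which is what the new documentation entries should spell out.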



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4873) Document missing time and timestamp formatting configuration properties

2018-08-28 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4873:

Attachment: PHOENIX-4873.diff

> Document missing time and timestamp formatting configuration properties
> ---
>
> Key: PHOENIX-4873
> URL: https://issues.apache.org/jira/browse/PHOENIX-4873
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: PHOENIX-4873.diff
>
>
> [https://phoenix.apache.org/tuning.html] lacks entries for 
> phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by 
> psql to parse out TIME and TIMESTAMP data types.
> Add them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4874) psql doesn't support date/time with values smaller than milliseconds

2018-08-28 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4874:
---

 Summary: psql doesn't support date/time with values smaller than 
milliseconds
 Key: PHOENIX-4874
 URL: https://issues.apache.org/jira/browse/PHOENIX-4874
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser


[https://phoenix.apache.org/tuning.html] lacks entries for 
phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by psql 
to parse out TIME and TIMESTAMP data types.

Add them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4873) Document missing time and timestamp formatting configuration properties

2018-08-28 Thread Josh Elser (JIRA)
Josh Elser created PHOENIX-4873:
---

 Summary: Document missing time and timestamp formatting 
configuration properties
 Key: PHOENIX-4873
 URL: https://issues.apache.org/jira/browse/PHOENIX-4873
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser


[https://phoenix.apache.org/tuning.html] lacks entries for 
phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by psql 
to parse out TIME and TIMESTAMP data types.

Add them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4859) Using local index in where statement for join (only rhs table) query fails

2018-08-28 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4859:
-
Attachment: PHOENIX-4859.patch

> Using local index in where statement for join (only rhs table) query fails
> --
>
> Key: PHOENIX-4859
> URL: https://issues.apache.org/jira/browse/PHOENIX-4859
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Subrat Mishra
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4859.patch
>
>
> Consider a simple scenario:
> {code:java}
> CREATE TABLE cust_data (customer_id integer primary key, postal_code varchar, 
> country_code varchar); 
> UPSERT INTO cust_data values(1,'560103','IN'); 
> CREATE LOCAL INDEX ZIP_INDEX ON cust_data(postal_code); 
> SELECT * from cust_data c1, cust_data c2 where c1.customer_id=c2.customer_id 
> and c2.postal_code='560103'; {code}
> Query fails with an exception:
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.phoenix.schema.LocalIndexDataColumnRef.<init>(LocalIndexDataColumnRef.java:40)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.projectAllIndexColumns(ProjectionCompiler.java:221)
> at 
> org.apache.phoenix.compile.ProjectionCompiler.compile(ProjectionCompiler.java:389)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:561)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:320)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileJoinQuery(QueryCompiler.java:228)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:191)
> at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:153)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.getApplicablePlans(QueryOptimizer.java:190)
> at 
> org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:112)
> at org.apache.phoenix.optimize.QueryOptimizer.optimize(QueryOptimizer.java:98)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:309)
> at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:291)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:290)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:283)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1830)
> at sqlline.Commands.execute(Commands.java:822)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:813)
> at sqlline.SqlLine.begin(SqlLine.java:686)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:291){code}
> Interestingly, if we change c2.postal_code to c1.postal_code in the where clause 
> as shown below, the query runs fine. 
> {code:java}
> SELECT * from cust_data c1, cust_data c2 where c1.customer_id=c2.customer_id 
> and c1.postal_code='560103'; {code}
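
Until the patch lands, one possible way to keep the original query running is to steer the optimizer away from the local index with the standard NO_INDEX hint. A minimal JDBC sketch, assuming a local quorum and the cust_data table from the repro (the hint trades the index lookup on postal_code for a full scan, so this is only a stopgap):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class LocalIndexJoinWorkaroundSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            // NO_INDEX asks the optimizer to stay on the data table, so the
            // local-index projection path that raises the NPE above is never taken.
            String sql = "SELECT /*+ NO_INDEX */ * FROM cust_data c1, cust_data c2 "
                       + "WHERE c1.customer_id = c2.customer_id AND c2.postal_code = '560103'";
            try (ResultSet rs = stmt.executeQuery(sql)) {
                ResultSetMetaData md = rs.getMetaData();
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        row.append(md.getColumnLabel(i)).append('=').append(rs.getObject(i)).append(' ');
                    }
                    System.out.println(row);
                }
            }
        }
    }
}
{code}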



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4869) Empty row when using OFFSET + LIMIT

2018-08-28 Thread Gardella Juan Pablo (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gardella Juan Pablo updated PHOENIX-4869:
-
Description: 
I'm using [Phoenix shipped 
|https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_release-notes/content/patch_phoenix.html]at
 [HDP 
2.6.1|https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/index.html]. I 
have a table defined as:
{code:sql}
 
create table test (
   id VARCHAR not null primary key,
   json VARCHAR
)
{code}
It has 2559774 rows. If I execute the following query, it returns a row with a 
null value.
{code:sql}
select * from
(
   SELECT ID
   FROM test
   LIMIT 10 OFFSET 10
)
where ID is null
{code}
 
 I was reviewing the git logs and I didn't see any commit related to that [1]. 
Note that the query does not fail for OFFSET and LIMIT values lower than 1. I've 
attached a capture of the query results.

!empty_row.png!

Notice that executing SELECT ID FROM test WHERE ID IS NULL returns an empty 
result as expected.

!no_results.png! 
  

Thread: 
[https://lists.apache.org/thread.html/fd54a0cf623a20ad54d1ac65656d01add8eeef74ad51fb1674afb566@%3Cuser.phoenix.apache.org%3E]

 [1] Similar but not equal is PHOENIX-3422. There the result is no data instead of 
a null row.

 

  was:
I'm using [Phoenix shipped 
|https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/bk_release-notes/content/patch_phoenix.html]at
 [HDP 
2.6.3|https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.3/index.html]. I 
have a table defined as:
{code:sql}
 
create table test (
   id VARCHAR not null primary key,
   json VARCHAR
)
{code}
It has 2559774 rows. If I execute the following query, it returns a row with a 
null value.
{code:sql}
select * from
(
   SELECT ID
   FROM test
   LIMIT 10 OFFSET 10
)
where ID is null
{code}
 
 I was reviewing the git logs and I didn't see any commit related to that[1]. 
Notice the query for OFFSET and LIMIT lowers than 1 does not fail. I've 
attached a capture of the query results.

!empty_row.png!

Notice if I execute SELECT ID FROM test WHERE ID IS NULL returns an empty 
result as expected.

!no_results.png! 
  

Thread: 
[https://lists.apache.org/thread.html/fd54a0cf623a20ad54d1ac65656d01add8eeef74ad51fb1674afb566@%3Cuser.phoenix.apache.org%3E]

 [1] Similar but not equal is PHOENIX-3422. The results is no data instead of 
null row.

 


> Empty row when using OFFSET + LIMIT
> ---
>
> Key: PHOENIX-4869
> URL: https://issues.apache.org/jira/browse/PHOENIX-4869
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Gardella Juan Pablo
>Priority: Major
> Attachments: empty_row.png, no_results.png
>
>
> I'm using [Phoenix shipped 
> |https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_release-notes/content/patch_phoenix.html]at
>  [HDP 
> 2.6.1|https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/index.html]. I 
> have a table defined as:
> {code:sql}
>  
> create table test (
>    id VARCHAR not null primary key,
>    json VARCHAR
> )
> {code}
> It has 2559774 rows. If I execute the following query, it returns a row with 
> a null value.
> {code:sql}
> select * from
> (
>    SELECT ID
>    FROM test
>    LIMIT 10 OFFSET 10
> )
> where ID is null
> {code}
>  
>  I was reviewing the git logs and I didn't see any commit related to that [1]. 
> Note that the query does not fail for OFFSET and LIMIT values lower than 1. I've 
> attached a capture of the query results.
> !empty_row.png!
> Notice that executing SELECT ID FROM test WHERE ID IS NULL returns an empty 
> result as expected.
> !no_results.png! 
>   
> Thread: 
> [https://lists.apache.org/thread.html/fd54a0cf623a20ad54d1ac65656d01add8eeef74ad51fb1674afb566@%3Cuser.phoenix.apache.org%3E]
>  [1] Similar but not equal is PHOENIX-3422. There the result is no data instead of 
> a null row.
>  
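
For anyone trying to confirm the behavior, a small JDBC check makes the spurious row easy to spot, since ResultSet.wasNull() distinguishes a genuine SQL NULL from no row at all. A minimal sketch, assuming a local quorum and the test table from the description:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OffsetLimitNullRowCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string; adjust for your cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT * FROM (SELECT ID FROM test LIMIT 10 OFFSET 10) WHERE ID IS NULL")) {
            int rows = 0;
            while (rs.next()) {
                String id = rs.getString(1);
                // wasNull() confirms whether the value just read was really SQL NULL.
                System.out.println("row " + (++rows) + ": id=" + id + " wasNull=" + rs.wasNull());
            }
            // Expected: 0 rows; the report shows one row with a null ID instead.
            System.out.println("total rows: " + rows);
        }
    }
}
{code}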



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)