[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-31 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206741679
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java ---
@@ -355,5 +355,19 @@
+ "," + COLUMN_NAME + "," + COLUMN_FAMILY + "))\n" + 
HConstants.VERSIONS + "=%s,\n"
+ HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" + 
PhoenixDatabaseMetaData.TRANSACTIONAL + "="
+ Boolean.FALSE;
+   
+public static final String CREATE_MUTEX_METADTA =
+   "CREATE IMMUTABLE TABLE " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_MUTEX_TABLE_NAME + "\"(\n" +
+   // Pk columns
+   TENANT_ID + " VARCHAR NULL," +
+   TABLE_SCHEM + " VARCHAR NULL," +
+   TABLE_NAME + " VARCHAR NOT NULL," +
+   COLUMN_NAME + " VARCHAR NULL," + // null for table row
+   COLUMN_FAMILY + " VARCHAR NULL " + // using for CF to 
uniqueness for columns
+   "CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY (" + 
TENANT_ID + ","
+   + TABLE_SCHEM + "," + TABLE_NAME + "," + COLUMN_NAME + "," 
+ COLUMN_FAMILY + "))\n" +
+   HConstants.VERSIONS + "=%s,\n" +
+   HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" +
+   PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE;
--- End diff --

Yes, probably our comments have crossed.


---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-31 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206738423
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2723,13 +2712,10 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection, HBaseAdmi
 try {
 
metaConnection.createStatement().executeUpdate(getChildLinkDDL());
 } catch (TableAlreadyExistsException e) {}
-// Catch the IOException to log the error message and then bubble 
it up for the client to retry.
 try {
-createSysMutexTableIfNotExists(hbaseAdmin);
-} catch (IOException exception) {
-logger.error("Failed to created SYSMUTEX table. Upgrade or 
migration is not possible without it. Please retry.");
-throw exception;
-}
+metaConnection.createStatement().executeUpdate(getMutexDDL());
--- End diff --

ok, then we are good.


---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-31 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206738342
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java ---
@@ -355,5 +355,19 @@
+ "," + COLUMN_NAME + "," + COLUMN_FAMILY + "))\n" + 
HConstants.VERSIONS + "=%s,\n"
+ HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" + 
PhoenixDatabaseMetaData.TRANSACTIONAL + "="
+ Boolean.FALSE;
+   
+public static final String CREATE_MUTEX_METADTA =
+   "CREATE IMMUTABLE TABLE " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_MUTEX_TABLE_NAME + "\"(\n" +
+   // Pk columns
+   TENANT_ID + " VARCHAR NULL," +
+   TABLE_SCHEM + " VARCHAR NULL," +
+   TABLE_NAME + " VARCHAR NOT NULL," +
+   COLUMN_NAME + " VARCHAR NULL," + // null for table row
+   COLUMN_FAMILY + " VARCHAR NULL " + // using for CF to 
uniqueness for columns
+   "CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY (" + 
TENANT_ID + ","
+   + TABLE_SCHEM + "," + TABLE_NAME + "," + COLUMN_NAME + "," 
+ COLUMN_FAMILY + "))\n" +
+   HConstants.VERSIONS + "=%s,\n" +
+   HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" +
+   PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE;
--- End diff --

OK, makes sense. Can you then update QueryServices.writeMutexCell(byte[] rowKey) and deleteMutexCell(byte[] rowKey) to accept all the arguments that form the right primary key for the table, for consistency?
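
For illustration, a minimal sketch of the signature change being suggested, assuming the same primary-key columns as the SYSTEM.MUTEX DDL shown in the diff above; the interface name is a hypothetical stand-in, not the actual ConnectionQueryServices API:

```
import java.sql.SQLException;

// Hypothetical sketch only (not the actual Phoenix API): every SYSTEM.MUTEX
// primary-key column becomes an explicit argument, so a caller can no longer
// hand in a pre-built byte[] rowKey that bypasses the table schema.
public interface MutexCellService {
    boolean writeMutexCell(String tenantId, String schemaName, String tableName,
                           String columnName, String columnFamily) throws SQLException;

    void deleteMutexCell(String tenantId, String schemaName, String tableName,
                         String columnName, String columnFamily) throws SQLException;
}
```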


---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-31 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206735726
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java ---
@@ -355,5 +355,19 @@
+ "," + COLUMN_NAME + "," + COLUMN_FAMILY + "))\n" + 
HConstants.VERSIONS + "=%s,\n"
+ HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" + 
PhoenixDatabaseMetaData.TRANSACTIONAL + "="
+ Boolean.FALSE;
+   
+public static final String CREATE_MUTEX_METADTA =
+   "CREATE IMMUTABLE TABLE " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_MUTEX_TABLE_NAME + "\"(\n" +
+   // Pk columns
+   TENANT_ID + " VARCHAR NULL," +
+   TABLE_SCHEM + " VARCHAR NULL," +
+   TABLE_NAME + " VARCHAR NOT NULL," +
+   COLUMN_NAME + " VARCHAR NULL," + // null for table row
+   COLUMN_FAMILY + " VARCHAR NULL " + // using for CF to 
uniqueness for columns
+   "CONSTRAINT " + SYSTEM_TABLE_PK_NAME + " PRIMARY KEY (" + 
TENANT_ID + ","
+   + TABLE_SCHEM + "," + TABLE_NAME + "," + COLUMN_NAME + "," 
+ COLUMN_FAMILY + "))\n" +
+   HConstants.VERSIONS + "=%s,\n" +
+   HColumnDescriptor.KEEP_DELETED_CELLS + "=%s,\n" +
+   PhoenixDatabaseMetaData.TRANSACTIONAL + "=" + Boolean.FALSE;
--- End diff --

Why is there a need to create a Phoenix-managed table for the mutex?
Also, the APIs QueryServices.writeMutexCell(byte[] rowKey) and deleteMutexCell(byte[] rowKey) don't enforce that the schema of the table will be followed.



---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-31 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206736061
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2723,13 +2712,10 @@ private void 
createOtherSystemTables(PhoenixConnection metaConnection, HBaseAdmi
 try {
 
metaConnection.createStatement().executeUpdate(getChildLinkDDL());
 } catch (TableAlreadyExistsException e) {}
-// Catch the IOException to log the error message and then bubble 
it up for the client to retry.
 try {
-createSysMutexTableIfNotExists(hbaseAdmin);
-} catch (IOException exception) {
-logger.error("Failed to created SYSMUTEX table. Upgrade or 
migration is not possible without it. Please retry.");
-throw exception;
-}
+metaConnection.createStatement().executeUpdate(getMutexDDL());
--- End diff --

Shouldn't the mutex table already be available, and the mutex for the upgrade already acquired, before you call createOtherSystemTables?
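
For context, a minimal sketch of how a checkAndPut-based upgrade mutex could be acquired before any system-table work starts (the PR title mentions checkAndMutate). The table name, column family, and qualifier below are illustrative assumptions, not the Phoenix implementation:

```
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Hedged sketch, not the Phoenix code: acquire the upgrade mutex by writing a
// cell only if no such cell exists yet, using HBase's atomic checkAndPut.
public final class UpgradeMutexSketch {
    public static boolean tryAcquire(Connection hbaseConn, byte[] rowKey) throws IOException {
        try (Table mutex = hbaseConn.getTable(TableName.valueOf("SYSTEM.MUTEX"))) {
            Put put = new Put(rowKey);
            put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("UPGRADE"), Bytes.toBytes("locked"));
            // expected value null means "cell must not exist", i.e. nobody else
            // currently holds the mutex; returns false if another client won the race
            return mutex.checkAndPut(rowKey, Bytes.toBytes("0"), Bytes.toBytes("UPGRADE"), null, put);
        }
    }
}
```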


---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-30 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r206330492
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -3604,6 +3681,18 @@ public MutationState addColumn(PTable table, 
List origColumnDefs,
 }
 } finally {
 connection.setAutoCommit(wasAutoCommit);
+if (acquiredMutex && !columns.isEmpty()) {
--- End diff --

where is acquiredMutex set to true?




---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-26 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r205594858
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -3604,6 +3675,17 @@ public MutationState addColumn(PTable table, 
List origColumnDefs,
 }
 } finally {
 connection.setAutoCommit(wasAutoCommit);
+if (!columns.isEmpty()) {
+for (PColumn pColumn : columns) {
+PName physicalName = table.getPhysicalName();
+String physicalSchemaName =
+
SchemaUtil.getSchemaNameFromFullName(physicalName.getString());
+String physicalTableName =
+
SchemaUtil.getTableNameFromFullName(physicalName.getString());
+deleteCell(null, physicalSchemaName, physicalTableName,
--- End diff --

@twdsilva , shouldn't you check whether you have actually acquired the lock (i.e., inserted the cell in the mutex table) before deleting the lock cell? Otherwise you might be deleting locks acquired by other threads. And I think you may need to do this per column.
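
For illustration, a hedged sketch of the guard being suggested: remember exactly which column mutex cells this thread managed to write, and release only those afterwards. The interface and method names below are stand-ins for whatever helpers the patch actually uses:

```
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Illustrative only: track the column mutex cells this thread acquired, so a
// failed or concurrent ADD COLUMN never deletes a lock held by another client.
final class ColumnMutexGuard {
    interface MutexCells {            // stand-in for the MetaDataClient helpers
        boolean writeCell(String schema, String table, String column) throws SQLException;
        void deleteCell(String schema, String table, String column) throws SQLException;
    }

    static void addColumnsWithMutex(MutexCells cells, String schema, String table,
                                    List<String> columns, Runnable doAddColumn) throws SQLException {
        List<String> acquired = new ArrayList<>();
        try {
            for (String column : columns) {
                if (!cells.writeCell(schema, table, column)) {
                    throw new SQLException("Concurrent ADD COLUMN on " + table + "." + column);
                }
                acquired.add(column);         // only remember cells we actually wrote
            }
            doAddColumn.run();                // the real metadata mutation
        } finally {
            for (String column : acquired) {  // release only our own mutex cells
                cells.deleteCell(schema, table, column);
            }
        }
    }
}
```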


---


[GitHub] phoenix pull request #313: PHOENIX-4799 Write cells using checkAndMutate to ...

2018-07-26 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/313#discussion_r205595041
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -3034,6 +3088,11 @@ MutationState dropTable(String schemaName, String 
tableName, String parentTableN
 return new MutationState(0, 0, connection);
 } finally {
 connection.setAutoCommit(wasAutoCommit);
+// acquire a mutex on the table to prevent creating views 
while concurrently
+// dropping the base table
+if (tableType == PTableType.TABLE) {
+deleteCell(null, schemaName, tableName, null);
+}
--- End diff --

nit: Please update the comment here to say that you are releasing the mutex.


---


[GitHub] phoenix issue #295: PHOENIX-4579: Add a config to conditionally create Phoen...

2018-04-03 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/295
  
bq. As of now, when the server gets connected to for the first time i.e. no 
SYSTEM tables exist, we call
ensureSystemTablesMigratedToSystemNamespace, but this just creates the 
HBase SYSTEM namespace and returns without migrating tables since they haven't 
been created yet. However, later on when we do create other SYSTEM tables, we 
aren't migrating them to the SYSTEM namespace right away.

As per the current implementation, if namespace mapping is enabled and no SYSTEM tables exist, then CREATE TABLE should automatically create the SYSTEM tables in the SYSTEM namespace. There should be no need to migrate them. It used to work like this; is that not the case now?

bq. Now, as per the current design, whenever a second client with 
NS-mapping enabled connects to the server, we will migrate these SYSTEM tables 
to the SYSTEM namespace when ensureSystemTablesMigratedToSystemNamespace is 
called within init (with my PR, this will be called inside ensureTableCreated 
when the table is SYSTEM.CATALOG and NS-mapping is enabled, instead of in 
init). My question is, shouldn't we map SYSTEM tables to the SYSTEM namespace 
immediately after creating them?
The first connection will be useless if, as per the new design, the SYSTEM tables are not getting created in the correct namespace.



---


[GitHub] phoenix pull request #289: PHOENIX-4528 PhoenixAccessController checks permi...

2018-01-15 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/289#discussion_r161661425
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java ---
@@ -267,4 +267,26 @@ public void testMultiTenantTables() throws Exception {
 verifyAllowed(readMultiTenantTableWithIndex(VIEW1_TABLE_NAME, 
"o1"), regularUser2);
 verifyAllowed(readMultiTenantTableWithoutIndex(VIEW2_TABLE_NAME, 
"o2"), regularUser2);
 }
+
+/**
+ * Grant RX permissions on the schema to regularUser1,
+ * Creating view on a table with that schema by regularUser1 should be 
allowed
+ */
+@Test
+public void testCreateViewOnTableWithRXPermsOnSchema() throws 
Exception {
+
+startNewMiniCluster();
+grantSystemTableAccess(superUser1, regularUser1, regularUser2, 
regularUser3);
+
+if(isNamespaceMapped) {
+verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, 
SCHEMA_NAME, true), superUser1);
+} else {
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
+}
+
+verifyAllowed(createView(VIEW1_TABLE_NAME, FULL_TABLE_NAME), 
regularUser1);
+}
--- End diff --

regularUser should also have read access on FULL_TABLE_NAME to create the view, right?


---


[GitHub] phoenix pull request #284: PHOENIX-4424 Allow users to create DEFAULT and HB...

2017-11-30 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/284#discussion_r154282569
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4125,7 +4125,7 @@ public MutationState 
createSchema(CreateSchemaStatement create) throws SQLExcept
 
 private void validateSchema(String schemaName) throws SQLException {
 if (SchemaUtil.NOT_ALLOWED_SCHEMA_LIST.contains(
-schemaName.toUpperCase())) { throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.SCHEMA_NOT_ALLOWED)
--- End diff --

@karanmehta93 , I tried case-sensitive schemas with the master code and I don't see any problem. Can you check whether normalization is happening at some other (or not at the right) place?
```
0: jdbc:phoenix:localhost> create schema "ABC";
No rows affected (0.224 seconds)
```
```
0: jdbc:phoenix:localhost> select TABLE_SCHEM from system.catalog;
+--+
| TABLE_SCHEM  |
+--+
| ABC  |
```
```
hbase(main):002:0> list_namespace
NAMESPACE
ABC
```
```
0: jdbc:phoenix:localhost> create schema "small_case"
. . . . . . . . . . . . .> ;
No rows affected (0.424 seconds)
0: jdbc:phoenix:localhost> select TABLE_SCHEM from system.catalog;
+--+
| TABLE_SCHEM  |
+--+
| small_case   |

hbase(main):003:0> list_namespace
NAMESPACE
small_case
```



---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-22 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152656117
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
 ---
@@ -229,17 +227,12 @@ public void 
handleRequireAccessOnDependentTable(String request, String userName,
 + dependentTable);
 return;
 }
-if (isAutomaticGrantEnabled) {
--- End diff --

Oh, I see. Here we are just talking about removing a config, not the automatic-grant flow in the coprocessor. I'm fine either way: keeping these configs with suitable defaults, or removing them completely.


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-22 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152648189
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
 ---
@@ -229,17 +227,12 @@ public void 
handleRequireAccessOnDependentTable(String request, String userName,
 + dependentTable);
 return;
 }
-if (isAutomaticGrantEnabled) {
--- End diff --

@karanmehta93 
Strict mode: it will check permissions for dependent tables as well. For example, if a user who has all access on a data table is creating an index, then we need to ensure that all other users of the data table can also access the new index table.
AutomaticGrant: it will automatically grant the required permissions to the dependent table's users.

@twdsilva , what about the case when a new index is being created?
Purpose of the automatic grant: let's say there are three users, where A and B have READ permission on the data table and user C has RWC permission on the data table. If user B creates an index, then we need to ensure that users A and C are also able to read the index, and that C is able to write to this index and drop it as well. So we give only the required permissions to the users of the data table on the index table. Access should propagate like this:

user | access on data table | access on index table | with automatic grant (access on index table will change like this) | comments
-- | -- | -- | -- | --
A | RAX | no access | RX | RX will be given on the index table
B | RX | RWXC | RWXC | no grant will happen
C | RWXAC | no access | RWCX | read, write and create will be given so that it can read/write to the index table and drop it as well






---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152483233
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java 
---
@@ -210,25 +98,20 @@ public Void run() throws Exception {
 return null;
 }
 });
-verifyAllowed(createSchema(schemaName), regularUser);
+verifyAllowed(createSchema(schemaName), regularUser1);
 // Unprivileged user cannot drop a schema
-verifyDenied(dropSchema(schemaName), unprivilegedUser);
-verifyDenied(createSchema(schemaName), unprivilegedUser);
+verifyDenied(dropSchema(schemaName), 
AccessDeniedException.class, unprivilegedUser);
+verifyDenied(createSchema(schemaName), 
AccessDeniedException.class, unprivilegedUser);
 
-verifyAllowed(dropSchema(schemaName), regularUser);
+verifyAllowed(dropSchema(schemaName), regularUser1);
 } finally {
 revokeAll();
 }
 }
 
 @Test
-public void testAutomaticGrantDisabled() throws Throwable{
-testIndexAndView(false);
-}
--- End diff --

Why is this removed?


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152483142
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
--- End diff --

If for some reason the grant doesn't succeed for all the tables, do we have a plan to provide a construct like "SHOW GRANTS" or something similar, so the user can see what grants are still in place for the user or on the table?
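
Until something like SHOW GRANTS exists, a hedged sketch of how a user could list the current HBase ACLs for a set of tables with the stock AccessControlClient API; this is standard HBase client code, not part of the PR, and the table-name regex is a placeholder:

```
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.UserPermission;

// Hedged sketch: list the ACLs currently stored for tables matching a regex,
// as a stop-gap for a SHOW GRANTS style command.
public final class ListGrantsSketch {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            // second argument is a table-name regex; "SOME_TABLE.*" is a placeholder
            List<UserPermission> perms = AccessControlClient.getUserPermissions(conn, "SOME_TABLE.*");
            for (UserPermission perm : perms) {
                System.out.println(perm);
            }
        }
    }
}
```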


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152481907
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
 ---
@@ -229,17 +227,12 @@ public void 
handleRequireAccessOnDependentTable(String request, String userName,
 + dependentTable);
 return;
 }
-if (isAutomaticGrantEnabled) {
--- End diff --

I think we can keep this option, as it will help an admin who doesn't want to see permission problems on the dependent tables, wants to manage the permissions himself once the table is created (with downtime), and doesn't want to rely on the automatic grant.


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152480973
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java ---
@@ -66,6 +68,46 @@ private void parseQueryThatShouldFail(String sql) throws 
Exception {
 }
 }
 
+@Test
+public void testParseGrantQuery() throws Exception {
+
+String sql0 = "GRANT 'RX' ON SYSTEM.\"SEQUENCE\" TO 'user'";
+parseQuery(sql0);
+String sql1 = "GRANT 'RWXCA' ON TABLE some_table0 TO 'user0'";
+parseQuery(sql1);
+String sql2 = "GRANT 'RWX' ON some_table1 TO 'user1'";
+parseQuery(sql2);
+String sql3 = "GRANT 'CA' ON SCHEMA some_schema2 TO 'user2'";
+parseQuery(sql3);
+String sql4 = "GRANT 'RXW' ON some_table3 TO GROUP 'group3'";
+parseQuery(sql4);
+String sql5 = "GRANT 'RXW' ON \"some_schema5\".\"some_table5\" TO 
GROUP 'group5'";
+parseQuery(sql5);
+String sql6 = "GRANT 'RWA' TO 'user6'";
+parseQuery(sql6);
+String sql7 = "GRANT 'A' TO GROUP 'group7'";
+parseQuery(sql7);
--- End diff --

Do we have validation on the permission type at the parsing side, e.g. failing when a permission is anything other than 'RWXCA'?
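
For illustration, a minimal sketch of what such parse-time validation could look like; this is purely illustrative and not the Phoenix parser code:

```
import java.util.regex.Pattern;

// Illustrative only: reject a GRANT/REVOKE permission string unless it uses
// only the letters R, W, X, C, A, each at most once, failing fast at parse time.
final class PermissionStringValidator {
    private static final Pattern VALID = Pattern.compile("[RWXCA]{1,5}");

    static void validate(String perms) {
        if (!VALID.matcher(perms).matches()
                || perms.chars().distinct().count() != perms.length()) {
            throw new IllegalArgumentException("Invalid permission string: " + perms
                    + " (only R, W, X, C, A are allowed, each at most once)");
        }
    }
}
```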


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152480644
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java
 ---
@@ -0,0 +1,16 @@
+package org.apache.phoenix.schema;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+
+import java.sql.SQLException;
+
+public class TablesNotInSyncException extends SQLException {
+private static final long serialVersionUID = 1L;
+private static SQLExceptionCode code = 
SQLExceptionCode.TABLES_NOT_IN_SYNC;
+
+public TablesNotInSyncException(String table1, String table2, String 
diff) {
+super(new SQLExceptionInfo.Builder(code).setMessage("Table: " + 
table1 + " and Table: " + table2 + " differ in " + diff).build().toString(), 
code.getSQLState(), code.getErrorCode());
+}
+
+}
--- End diff --

we can add some comments here.


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152479671
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
+}
+
+} catch (SQLException e) {
+// Bubble up the SQL Exception
+throw e;
+} catch (Throwable throwable) {
+// Wrap around other exceptions to PhoenixIOException (Ex: 
org.apache.hadoop.hbase.security.AccessDeniedException)
+throw ServerUtil.parseServerException(throwable);
+}
+
+return new MutationState(0, 0, connection);
+}
+
+private void grantPermissionsToTables(HConnection hConnection, 
GrantStatement grantStatement, PTable inputTable) throws Throwable {
+
+org.apache.hadoop.hbase.TableName tableName = 
SchemaUtil.getPhysicalTableName
+(inputTable.getName().getBytes(), 
inputTable.isNamespaceMapped());
+
+grantPermissionsToTable(hConnection, grantStatement, tableName);
+
+for(PTable indexTable : inputTable.getIndexes()) {
+// Local Indexes don't correspond to new physical table, they 
are just stored in separate CF of base table.
+if(indexTable.getIndexType().equals(IndexType.LOCAL)) {
+continue;
+}
+logger.info("Granting " + 
Arrays.toString(grantStatement.getPermsList()) +
+" perms to IndexTable: " + indexTable.getName() + " 
BaseTable: " + inputTable.getName());
+if (inputTable.isNamespaceMapped() != 
indexTable.isNamespaceMapped()) {
+throw new 
TablesNotInSyncException(inputTable.getTableName().getString(),
+indexTable.getTableName().getString(), "Namespace 
properties");
+}
+tableName = 
SchemaUtil.getPhysicalTableName(indexTable.getName().getBytes(), 
indexTable.isNamespaceMapped());
+grantPermissionsToTable(hConnection, grantStatement, 
tableName);
--- End diff --

You could have a common method that gives you the set of physical tables on which revoke and grant are required.
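
For illustration, a hedged sketch of such a shared helper, built only from the calls already visible in the diff above; the class and method names are made up, and a real version would likely also check whether the view-index physical table actually exists:

```
import java.util.LinkedHashSet;
import java.util.Set;
import org.apache.hadoop.hbase.TableName;
import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.schema.PTable.IndexType;
import org.apache.phoenix.util.MetaDataUtil;
import org.apache.phoenix.util.SchemaUtil;

// Hedged sketch: one place that resolves every physical HBase table a GRANT or
// REVOKE on a Phoenix table has to touch, so both code paths can share it.
final class PhysicalTablesForAcl {
    static Set<TableName> resolve(PTable table) {
        Set<TableName> physicalTables = new LinkedHashSet<>();
        physicalTables.add(SchemaUtil.getPhysicalTableName(
                table.getName().getBytes(), table.isNamespaceMapped()));
        for (PTable index : table.getIndexes()) {
            // local indexes live in a column family of the base table; no extra grant needed
            if (IndexType.LOCAL.equals(index.getIndexType())) {
                continue;
            }
            physicalTables.add(SchemaUtil.getPhysicalTableName(
                    index.getName().getBytes(), index.isNamespaceMapped()));
        }
        // view indexes share one physical table derived from the base table's name;
        // callers may still need to check that it exists before granting on it
        physicalTables.add(TableName.valueOf(
                MetaDataUtil.getViewIndexPhysicalName(table.getPhysicalName().getBytes())));
        return physicalTables;
    }
}
```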


---


[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152478348
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
+}
+
+} catch (SQLException e) {
+// Bubble up the SQL Exception
+throw e;
+} catch (Throwable throwable) {
+// Wrap around other exceptions to PhoenixIOException (Ex: 
org.apache.hadoop.hbase.security.AccessDeniedException)
+throw ServerUtil.parseServerException(throwable);
+}
+
+return new MutationState(0, 0, connection);
+}
+
+private void grantPermissionsToTables(HConnection hConnection, 
GrantStatement grantStatement, PTable inputTable) throws Throwable {
+
+org.apache.hadoop.hbase.TableName tableName = 
SchemaUtil.getPhysicalTableName
+(inputTable.getName().getBytes(), 
inputTable.isNamespaceMapped());
+
+grantPermissionsToTable(hConnection, grantStatement, tableName);
+
+for(PTable indexTable : inputTable.getIndexes()) {
+// Local Indexes don't correspond to new physical table, they 
are just stored in separate CF of base table.
+if(indexTable.getIndexType().equals(IndexType.LOCAL)) {
+continue;
+}
+logger.info("Granting " + 
Arrays.toString(grantStatement.getPermsList()) +
+" perms to IndexTable: " + indexTable.getName() + " 
BaseTable: " + inputTable.getName());
+if (inputTable.isNamespaceMapped() != 
indexTable.isNamespaceMapped()) {
+throw new 
TablesNotInSyncException(inputTable.getTableName().getString(),
+indexTable.getTableName().getString(), "Namespace 
properties");
+}
+tableName = 
SchemaUtil.getPhysicalTableName(indexTable.getName().getBytes(), 
indexTable.isNamespaceMapped());
+grantPermissionsToTable(hConnection, grantStatement, 
tableName);
+}
+
+byte[] viewIndexTableBytes = 
MetaDataUtil.getViewIndexPhysicalName(inputTable.getPhysicalName().getBytes());
+tableName = 
org.apache.hadoop.hbase.TableName.valueOf(viewIndexTableBytes);
+boolean viewIndexTableExists = 
connection.getQueryServices().getAdmin().tableExists(tableName);
+if(!viewIndexTableExists && inputTable.isMul

[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152478050
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
+}
+
+} catch (SQLException e) {
+// Bubble up the SQL Exception
+throw e;
+} catch (Throwable throwable) {
+// Wrap around other exceptions to PhoenixIOException (Ex: 
org.apache.hadoop.hbase.security.AccessDeniedException)
+throw ServerUtil.parseServerException(throwable);
+}
+
+return new MutationState(0, 0, connection);
+}
+
+private void grantPermissionsToTables(HConnection hConnection, 
GrantStatement grantStatement, PTable inputTable) throws Throwable {
+
+org.apache.hadoop.hbase.TableName tableName = 
SchemaUtil.getPhysicalTableName
+(inputTable.getName().getBytes(), 
inputTable.isNamespaceMapped());
+
+grantPermissionsToTable(hConnection, grantStatement, tableName);
+
+for(PTable indexTable : inputTable.getIndexes()) {
+// Local Indexes don't correspond to new physical table, they 
are just stored in separate CF of base table.
+if(indexTable.getIndexType().equals(IndexType.LOCAL)) {
+continue;
+}
+logger.info("Granting " + 
Arrays.toString(grantStatement.getPermsList()) +
+" perms to IndexTable: " + indexTable.getName() + " 
BaseTable: " + inputTable.getName());
+if (inputTable.isNamespaceMapped() != 
indexTable.isNamespaceMapped()) {
+throw new 
TablesNotInSyncException(inputTable.getTableName().getString(),
+indexTable.getTableName().getString(), "Namespace 
properties");
+}
+tableName = 
SchemaUtil.getPhysicalTableName(indexTable.getName().getBytes(), 
indexTable.isNamespaceMapped());
+grantPermissionsToTable(hConnection, grantStatement, 
tableName);
+}
+
+byte[] viewIndexTableBytes = 
MetaDataUtil.getViewIndexPhysicalName(inputTable.getPhysicalName().getBytes());
+tableName = 
org.apache.hadoop.hbase.TableName.valueOf(viewIndexTableBytes);
+boolean viewIndexTableExists = 
connection.getQueryServices().getAdmin().tableExists(tableName);
+if(!viewIndexTableExists && inputTable.isMul

[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152477915
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
+}
+
+} catch (SQLException e) {
+// Bubble up the SQL Exception
+throw e;
+} catch (Throwable throwable) {
+// Wrap around other exceptions to PhoenixIOException (Ex: 
org.apache.hadoop.hbase.security.AccessDeniedException)
+throw ServerUtil.parseServerException(throwable);
+}
+
+return new MutationState(0, 0, connection);
+}
+
+private void grantPermissionsToTables(HConnection hConnection, 
GrantStatement grantStatement, PTable inputTable) throws Throwable {
+
+org.apache.hadoop.hbase.TableName tableName = 
SchemaUtil.getPhysicalTableName
+(inputTable.getName().getBytes(), 
inputTable.isNamespaceMapped());
+
+grantPermissionsToTable(hConnection, grantStatement, tableName);
+
+for(PTable indexTable : inputTable.getIndexes()) {
+// Local Indexes don't correspond to new physical table, they 
are just stored in separate CF of base table.
+if(indexTable.getIndexType().equals(IndexType.LOCAL)) {
+continue;
+}
+logger.info("Granting " + 
Arrays.toString(grantStatement.getPermsList()) +
+" perms to IndexTable: " + indexTable.getName() + " 
BaseTable: " + inputTable.getName());
+if (inputTable.isNamespaceMapped() != 
indexTable.isNamespaceMapped()) {
+throw new 
TablesNotInSyncException(inputTable.getTableName().getString(),
+indexTable.getTableName().getString(), "Namespace 
properties");
+}
+tableName = 
SchemaUtil.getPhysicalTableName(indexTable.getName().getBytes(), 
indexTable.isNamespaceMapped());
+grantPermissionsToTable(hConnection, grantStatement, 
tableName);
+}
+
+byte[] viewIndexTableBytes = 
MetaDataUtil.getViewIndexPhysicalName(inputTable.getPhysicalName().getBytes());
+tableName = 
org.apache.hadoop.hbase.TableName.valueOf(viewIndexTableBytes);
+boolean viewIndexTableExists = 
connection.getQueryServices().getAdmin().tableExists(tableName);
+if(!viewIndexTableExists && inputTable.isMul

[GitHub] phoenix pull request #283: PHOENIX-672 Add GRANT and REVOKE commands using H...

2017-11-21 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/283#discussion_r152477760
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -4168,4 +4176,197 @@ public MutationState useSchema(UseSchemaStatement 
useSchemaStatement) throws SQL
 }
 return new MutationState(0, 0, connection);
 }
+
+public MutationState grantPermission(GrantStatement grantStatement) 
throws SQLException {
+
+StringBuffer grantPermLog = new StringBuffer();
+grantPermLog.append("Grant Permissions requested for user/group: " 
+ grantStatement.getName());
+if (grantStatement.getSchemaName() != null) {
+grantPermLog.append(" for Schema: " + 
grantStatement.getSchemaName());
+} else if (grantStatement.getTableName() != null) {
+grantPermLog.append(" for Table: " + 
grantStatement.getTableName());
+}
+grantPermLog.append(" Permissions: " + 
Arrays.toString(grantStatement.getPermsList()));
+logger.info(grantPermLog.toString());
+
+HConnection hConnection = 
connection.getQueryServices().getAdmin().getConnection();
+
+try {
+if (grantStatement.getSchemaName() != null) {
+// SYSTEM.CATALOG doesn't have any entry for "default" 
HBase namespace, hence we will bypass the check
+
if(!grantStatement.getSchemaName().equals(QueryConstants.HBASE_DEFAULT_SCHEMA_NAME))
 {
+
FromCompiler.getResolverForSchema(grantStatement.getSchemaName(), connection);
+}
+grantPermissionsToSchema(hConnection, grantStatement);
+
+} else if (grantStatement.getTableName() != null) {
+PTable inputTable = PhoenixRuntime.getTable(connection,
+
SchemaUtil.normalizeFullTableName(grantStatement.getTableName().toString()));
+if (!(PTableType.TABLE.equals(inputTable.getType()) || 
PTableType.SYSTEM.equals(inputTable.getType( {
+throw new AccessDeniedException("Cannot GRANT 
permissions on INDEX TABLES or VIEWS");
+}
+grantPermissionsToTables(hConnection, grantStatement, 
inputTable);
+
+} else {
+grantPermissionsToUser(hConnection, grantStatement);
+}
+
+} catch (SQLException e) {
+// Bubble up the SQL Exception
+throw e;
+} catch (Throwable throwable) {
+// Wrap around other exceptions to PhoenixIOException (Ex: 
org.apache.hadoop.hbase.security.AccessDeniedException)
+throw ServerUtil.parseServerException(throwable);
+}
+
+return new MutationState(0, 0, connection);
+}
+
+private void grantPermissionsToTables(HConnection hConnection, 
GrantStatement grantStatement, PTable inputTable) throws Throwable {
+
+org.apache.hadoop.hbase.TableName tableName = 
SchemaUtil.getPhysicalTableName
+(inputTable.getName().getBytes(), 
inputTable.isNamespaceMapped());
+
+grantPermissionsToTable(hConnection, grantStatement, tableName);
+
+for(PTable indexTable : inputTable.getIndexes()) {
+// Local Indexes don't correspond to new physical table, they 
are just stored in separate CF of base table.
+if(indexTable.getIndexType().equals(IndexType.LOCAL)) {
+continue;
+}
+logger.info("Granting " + 
Arrays.toString(grantStatement.getPermsList()) +
+" perms to IndexTable: " + indexTable.getName() + " 
BaseTable: " + inputTable.getName());
+if (inputTable.isNamespaceMapped() != 
indexTable.isNamespaceMapped()) {
+throw new 
TablesNotInSyncException(inputTable.getTableName().getString(),
+indexTable.getTableName().getString(), "Namespace 
properties");
+}
+tableName = 
SchemaUtil.getPhysicalTableName(indexTable.getName().getBytes(), 
indexTable.isNamespaceMapped());
+grantPermissionsToTable(hConnection, grantStatement, 
tableName);
+}
+
+byte[] viewIndexTableBytes = 
MetaDataUtil.getViewIndexPhysicalName(inputTable.getPhysicalName().getBytes());
+tableName = 
org.apache.hadoop.hbase.TableName.valueOf(viewIndexTableBytes);
+boolean viewIndexTableExists = 
connection.getQueryServices().getAdmin().tableExists(tableName);
+if(!viewIndexTableExists && inputTable.isMul

[GitHub] phoenix issue #268: PHOENIX-4010 Hash Join cache may not be send to all regi...

2017-09-08 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/268
  
Thanks @JamesRTaylor , I have taken care of the review comments in the latest commit. Let me know if it is fine now.


---


[GitHub] phoenix issue #268: PHOENIX-4010 Hash Join cache may not be send to all regi...

2017-09-07 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/268
  
Ping @JamesRTaylor 


---


[GitHub] phoenix issue #268: PHOENIX-4010 Hash Join cache may not be send to all regi...

2017-08-21 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/268
  
@JamesRTaylor , can you please review? I have removed the dependency on PHOENIX-3163.


---


[GitHub] phoenix pull request #268: PHOENIX-4010 Hash Join cache may not be send to a...

2017-07-26 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/268#discussion_r129589111
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -804,6 +804,7 @@ public Long getEstimatedBytesToScan() throws 
SQLException {
 

 // UPSERT SELECT run client-side
 
/
+
ScanUtil.setClientSideUpsertSelect(queryPlan.getContext().getScan());
--- End diff --

This change is actually a part of PHOENIX-3163; let me see if we can improve it.


---


[GitHub] phoenix pull request #268: PHOENIX-4010 Hash Join cache may not be send to a...

2017-07-21 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/268

PHOENIX-4010 Hash Join cache may not be send to all regionservers whe…

…n we have stale HBase meta cache

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-4010

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/268.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #268


commit ebf6dbf6a9a9d8bb408cec70f99f3fb5abc8ddcc
Author: Ankit Singhal 
Date:   2017-07-21T08:47:31Z

PHOENIX-4010 Hash Join cache may not be send to all regionservers when we 
have stale HBase meta cache




---


[GitHub] phoenix issue #261: PHOENIX-3944 ReadOnlyTableException occurs when we map P...

2017-06-26 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/261
  
Thanks @brfrn169, now the changes look great. Let me commit this today or tomorrow.


---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-23 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123726523
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -1420,7 +1420,7 @@ public MetaDataMutationResult createTable(final 
List tableMetaData, fi
 byte[] tenantIdBytes = 
rowKeyMetadata[PhoenixDatabaseMetaData.TENANT_ID_INDEX];
 byte[] schemaBytes = 
rowKeyMetadata[PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX];
 byte[] tableBytes = 
rowKeyMetadata[PhoenixDatabaseMetaData.TABLE_NAME_INDEX];
-byte[] tableName = physicalTableName != null ? physicalTableName : 
SchemaUtil.getTableNameAsBytes(schemaBytes, tableBytes);
+byte[] tableName = SchemaUtil.getTableNameAsBytes(schemaBytes, 
tableBytes);
--- End diff --

Why do we need this change? I think it should be the physical table name; we need it to verify metadata when views are created on a Phoenix table (create view v as select * from table) in ensureTableCreated(tableName, tableType, tableProps, families, splits, true, isNamespaceMapped).



---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-23 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123725378
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -196,7 +196,8 @@ public static ColumnResolver 
getResolverForCreation(final CreateTableStatement s
 if (htable != null) Closeables.closeQuietly(htable);
 }
 tableNode = NamedTableNode.create(null, baseTable, 
statement.getColumnDefs());
-return new SingleTableColumnResolver(connection, 
tableNode, e.getTimeStamp(), new HashMap(1), false);
+boolean isNamespaceMapped = 
SchemaUtil.isNamespaceMappingEnabled(statement.getTableType(), 
connection.getQueryServices().getProps());
--- End diff --

I think it is necessary. How about asserting in the test that the tables are mapped properly? You can do it like this:

assert(QueryUtil.getExplainPlan(conn.createStatement().executeQuery("explain select * from \"" + ns + "." + table + "\"")).contains(ns + "." + table));


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22676582
  
In phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java:
In phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java on 
line 127:
can you please add a comment for each conditional path.


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22676567
  
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java:
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java 
on line 96:
The code looks brittle here. Can you please refactor it and move this cache logic into a different iterator?


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22676426
  
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java:
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java 
on line 122:
There seems to be too much duplication of code; please move it into methods.



---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22676177
  
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java:
In 
phoenix-core/src/main/java/org/apache/phoenix/iterate/CursorResultIterator.java 
on line 43:
Can't we use a doubly-linked list instead of two stacks, to avoid copying data and for easier management?
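
For illustration, a hedged sketch of the alternative structure: a single doubly-linked list with a "current row" pointer serves both FETCH NEXT and FETCH PRIOR without copying rows between two stacks. The names below are made up; this is not the CursorResultIterator code:

```
// Illustrative sketch only: one doubly-linked list plus a cursor pointer; moving
// between FETCH NEXT and FETCH PRIOR is just a pointer move, with no copying.
final class CursorBuffer<T> {
    private static final class Node<T> {
        final T row;
        Node<T> prev, next;
        Node(T row) { this.row = row; }
    }

    private Node<T> head;      // oldest buffered row
    private Node<T> tail;      // newest buffered row
    private Node<T> current;   // row returned by the last fetch; null before the first fetch

    /** Buffer a newly scanned row at the end. */
    void append(T row) {
        Node<T> node = new Node<>(row);
        if (tail == null) {
            head = tail = node;
        } else {
            tail.next = node;
            node.prev = tail;
            tail = node;
        }
    }

    /** FETCH NEXT: advance the pointer, or return null if nothing newer is buffered. */
    T fetchNext() {
        Node<T> next = (current == null) ? head : current.next;
        if (next == null) return null;
        current = next;
        return current.row;
    }

    /** FETCH PRIOR: step the pointer back, or return null if already at the oldest row. */
    T fetchPrior() {
        if (current == null || current.prev == null) return null;
        current = current.prev;
        return current.row;
    }
}
```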


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22674920
  
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java:
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java on 
line 307:
Add a similar test case with FETCH NEXT 1000 (more rows than the table has) and confirm that FETCH PRIOR still works.


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22674824
  
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java:
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java on 
line 257:
Please update all the tests to use a smaller cache size (like 2).


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/c5644c521546efa69b10cf7719fa1c34d2aa12a9#commitcomment-22674761
  
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java:
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java on 
line 540:
Add a test case where the first call itself is FETCH PRIOR.


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22672590
  
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java:
In 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CursorFetchPriorIT.java on 
line 281:
Please remove the System.out statements.


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22672273
  
In 
phoenix-core/src/main/java/org/apache/phoenix/execute/CursorFetchPlan.java:
In 
phoenix-core/src/main/java/org/apache/phoenix/execute/CursorFetchPlan.java on 
line 63:
Read the config in the constructor instead.


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22672267
  
In phoenix-core/src/main/antlr3/PhoenixSQL.g:
In phoenix-core/src/main/antlr3/PhoenixSQL.g on line 774:
can't we use the same statement for both? Something like this:
FETCH p=NEXT|p=PRIOR (a=NUMBER)? (ROW|ROWS)? FROM c=cursor_name
    {ret = factory.fetch(c, p!=null, a == null ? 1 : Integer.parseInt(a.getText())); }


---


[GitHub] phoenix pull request #:

2017-06-21 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:


https://github.com/apache/phoenix/commit/26d7657f1c2b9921dcbd9c22ccf7d75e8ccf5bb7#commitcomment-22672204
  
In phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java:
In phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java on 
line 182:
Do we really need to catch the exception here? It can be thrown from the caller, right?
@Override
public Tuple next() throws SQLException {


---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-20 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123153895
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
---
@@ -710,6 +714,27 @@ public void testCreateViewWithUpdateCacheFrquency() 
throws Exception {
   assertTrue(rs.next());
 }
 
+@Test
+public void 
testCreateViewMappedToExistingHbaseTableWithNamespaceMappingEnabled() throws 
Exception {
+Properties props = new Properties();
+props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
"true");
+Connection conn = DriverManager.getConnection(getUrl(),props);
+
+String ns = "NS_" + generateUniqueName();
+String table = "TABLE_" + generateUniqueName();
+String fam = "CF";
+
+HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+admin.createNamespace(NamespaceDescriptor.create(ns).build());
+HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(ns, 
table));
+desc.addFamily(new HColumnDescriptor(fam));
+admin.createTable(desc);
+
+conn.createStatement().execute("CREATE SCHEMA " + ns);
+conn.createStatement().execute("CREATE VIEW " + ns + "." + table + 
" "
++ "(PK VARCHAR PRIMARY KEY, " + fam + ".COL VARCHAR)");
--- End diff --

can you update the test to include a case where the user has an existing \"NS.TBL\" table in the default namespace and wants to map a view to it (\"NS.TBL\"), for backward compatibility:
```
// the HBase table name contains a dot, so it lives in the default namespace
desc = new HTableDescriptor(TableName.valueOf(ns + "." + table));
desc.addFamily(new HColumnDescriptor(fam));
admin.createTable(desc);

// quoting the full name maps the view to that table rather than to namespace ns
conn.createStatement().execute("CREATE VIEW \"" + ns + "." + table + "\""
        + "(PK VARCHAR PRIMARY KEY, " + fam + ".COL VARCHAR)");
```


---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-20 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123154332
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java ---
@@ -302,6 +302,10 @@ public static String getTableName(byte[] schemaName, 
byte[] tableName) {
 return getName(schemaName, tableName);
 }
 
+public static String getPhysicalName(String schemaName, String 
tableName) {
+return schemaName + QueryConstants.NAMESPACE_SEPARATOR + tableName;
+}
+
 public static String getColumnDisplayName(byte[] cf, byte[] cq) {
--- End diff --

we probably don't need this method after the above changes.


---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-20 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123154290
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -2135,7 +2135,7 @@ private PTable 
createTableInternal(CreateTableStatement statement, byte[][] spli
 // Upsert physical name for mapped view only if the full 
physical table name is different than the full table name
 // Otherwise, we end up with a self-referencing link and 
then cannot ever drop the view.
 if (viewType != ViewType.MAPPED
-|| 
!physicalNames.get(0).getString().equals(SchemaUtil.getTableName(schemaName, 
tableName))) {
+|| 
!physicalNames.get(0).getString().equals(SchemaUtil.getPhysicalName(schemaName, 
tableName))) {
--- End diff --

can you include isNamespaceMapped, since a table name with a schema can also be in the default namespace:
```
-|| !physicalNames.get(0).getString().equals(SchemaUtil.getTableName(schemaName, tableName))) {
+|| !physicalNames.get(0).getString().equals(SchemaUtil.getPhysicalTableName(
+        SchemaUtil.getTableNameAsBytes(schemaName, tableName), isNamespaceMapped).getNameAsString())) {
```



---


[GitHub] phoenix pull request #261: PHOENIX-3944 ReadOnlyTableException occurs when w...

2017-06-20 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/261#discussion_r123154068
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -196,7 +196,8 @@ public static ColumnResolver 
getResolverForCreation(final CreateTableStatement s
 if (htable != null) Closeables.closeQuietly(htable);
 }
 tableNode = NamedTableNode.create(null, baseTable, 
statement.getColumnDefs());
-return new SingleTableColumnResolver(connection, 
tableNode, e.getTimeStamp(), new HashMap(1), false);
+boolean isNamespaceMapped = 
SchemaUtil.isNamespaceMappingEnabled(statement.getTableType(), 
connection.getQueryServices().getProps());
--- End diff --


To handle a mapped view \"NS.TBL\" (without a schema):

```
-byte[] fullTableName = SchemaUtil.getPhysicalName(
-        SchemaUtil.getTableNameAsBytes(baseTable.getSchemaName(), baseTable.getTableName()),
-        connection.getQueryServices().getProps()).getName();
+byte[] fullTableName = SchemaUtil
+        .getPhysicalName(Bytes.toBytes(baseTable.toString()),
+                connection.getQueryServices().getProps())
+        .getName();
```



---


[GitHub] phoenix issue #229: PHOENIX-3572 Support FETCH NEXT|n ROWS query on cursor

2017-05-16 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/229
  
Looks good to me as well. Thanks @bijugs for working on this. Let me commit this for you.


---


[GitHub] phoenix pull request #229: PHOENIX-3572 Support FETCH NEXT|n ROWS query on c...

2017-02-12 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/229#discussion_r100738871
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java ---
@@ -0,0 +1,203 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.util;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
+import org.apache.phoenix.execute.CursorFetchPlan;
+import org.apache.phoenix.iterate.CursorResultIterator;
+import org.apache.phoenix.parse.CloseStatement;
+import org.apache.phoenix.parse.DeclareCursorStatement;
+import org.apache.phoenix.parse.OpenStatement;
+import org.apache.phoenix.schema.tuple.Tuple;
+
+public final class CursorUtil {
+
+private static class CursorWrapper {
+private final String cursorName;
+private final String selectSQL;
+private boolean isOpen = false;
+QueryPlan queryPlan;
+ImmutableBytesWritable row;
+ImmutableBytesWritable previousRow;
+private Scan scan;
+private boolean moreValues=true;
+private boolean isReversed;
+private boolean islastCallNext;
+private CursorFetchPlan fetchPlan;
+private int offset = -1;
+
+private CursorWrapper(String cursorName, String selectSQL, 
QueryPlan queryPlan){
+this.cursorName = cursorName;
+this.selectSQL = selectSQL;
+this.queryPlan = queryPlan;
+this.islastCallNext = true;
+this.fetchPlan = new CursorFetchPlan(queryPlan);
+}
+
+private synchronized void openCursor(Connection conn) throws 
SQLException {
+if(isOpen){
+return;
+}
+this.scan = this.queryPlan.getContext().getScan();
+
isReversed=OrderBy.REV_ROW_KEY_ORDER_BY.equals(this.queryPlan.getOrderBy());
+isOpen = true;
+}
+
+private void closeCursor() throws SQLException {
+isOpen = false;
+((CursorResultIterator) fetchPlan.iterator()).closeCursor();
+//TODO: Determine if the cursor should be removed from the 
HashMap at this point.
+//Semantically it makes sense that something which is 'Closed' 
one should be able to 'Open' again.
+mapCursorIDQuery.remove(this.cursorName);
+}
+
+private QueryPlan getFetchPlan(boolean isNext, int fetchSize) 
throws SQLException {
+if (!isOpen)
+throw new SQLException("Fetch call on closed cursor '" + 
this.cursorName + "'!");
+
((CursorResultIterator)fetchPlan.iterator()).setFetchSize(fetchSize);
+if (!queryPlan.getStatement().isAggregate() || 
!queryPlan.getStatement().isDistinct()) { 
+   if (islastCallNext != isNext) {
+if (islastCallNext && !isReversed){
+   ScanUtil.setReversed(scan);
+} else {
+   ScanUtil.unsetReversed(scan);
+}
--- End diff --

this code seems to be for reverse/PRIOR support and belongs to another JIRA. Can we remove it here, provided that doesn't affect the functionality?


---


[GitHub] phoenix pull request #229: PHOENIX-3572 Support FETCH NEXT|n ROWS query on c...

2017-02-12 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/229#discussion_r100738646
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/execute/CursorFetchPlan.java ---
@@ -0,0 +1,87 @@
+package org.apache.phoenix.execute;
+
+import java.sql.ParameterMetaData;
+import java.sql.SQLException;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.phoenix.compile.ExplainPlan;
+import org.apache.phoenix.compile.GroupByCompiler.GroupBy;
+import org.apache.phoenix.compile.OrderByCompiler.OrderBy;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.compile.RowProjector;
+import org.apache.phoenix.compile.StatementContext;
+import org.apache.phoenix.iterate.CursorResultIterator;
+import org.apache.phoenix.iterate.ParallelScanGrouper;
+import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.jdbc.PhoenixStatement.Operation;
+import org.apache.phoenix.parse.FilterableStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.schema.TableRef;
+
+public class CursorFetchPlan extends DelegateQueryPlan {
+
+   //QueryPlan cursorQueryPlan;
+   private CursorResultIterator resultIterator;
+   private int fetchSize;
+
+   public CursorFetchPlan(QueryPlan cursorQueryPlan) {
+   super(cursorQueryPlan);
+   }
+
+
+   @Override
+   public ResultIterator iterator() throws SQLException {
+   // TODO Auto-generated method stub
+   StatementContext context = delegate.getContext();
+   if (resultIterator != null) {
+   return resultIterator;
+   } else {
+   context.getOverallQueryMetrics().startQuery();
+   resultIterator = (CursorResultIterator) 
delegate.iterator();
+   return resultIterator;
+   }
+   }
+
+   @Override
+   public ResultIterator iterator(ParallelScanGrouper scanGrouper) throws 
SQLException {
+   // TODO Auto-generated method stub
+   StatementContext context = delegate.getContext();
+   if (resultIterator != null) {
+   return resultIterator;
+   } else {
+   context.getOverallQueryMetrics().startQuery();
+   resultIterator = (CursorResultIterator) 
delegate.iterator(scanGrouper);
+   return resultIterator;
+   }
+   }
+
+   @Override
+   public ResultIterator iterator(ParallelScanGrouper scanGrouper, Scan 
scan) throws SQLException {
+   // TODO Auto-generated method stub
--- End diff --

can you merge these iterators? They all do the same thing, and the base class will call iterator(ParallelScanGrouper scanGrouper, Scan scan) internally from the other overloaded methods with the default parameter values.
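For illustration, a minimal sketch of that consolidation; it assumes the base DelegateQueryPlan routes the zero- and one-argument overloads to the two-argument method, so only this override would remain here:
```
@Override
public ResultIterator iterator(ParallelScanGrouper scanGrouper, Scan scan) throws SQLException {
    if (resultIterator == null) {
        // start query metrics only once, on the first fetch against this cursor
        delegate.getContext().getOverallQueryMetrics().startQuery();
        resultIterator = (CursorResultIterator) delegate.iterator(scanGrouper, scan);
    }
    return resultIterator;
}
```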


---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-12-07 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r91459367
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int 
exitCode) {
 formatter.printHelp("help", options);
 System.exit(exitCode);
 }
+
+class JobFactory {
+Connection connection;
+Configuration configuration;
+private Path outputPath;
 
-@Override
-public int run(String[] args) throws Exception {
-Connection connection = null;
-try {
-CommandLine cmdLine = null;
-try {
-cmdLine = parseOptions(args);
-} catch (IllegalStateException e) {
-printHelpAndExit(e.getMessage(), getOptions());
+public JobFactory(Connection connection, Configuration 
configuration, Path outputPath) {
+this.connection = connection;
+this.configuration = configuration;
+this.outputPath = outputPath;
+
+}
+
+public Job getJob(String schemaName, String indexTable, String 
dataTable, boolean useDirectApi) throws Exception {
+if (indexTable == null) {
+return configureJobForPartialBuild(schemaName, dataTable);
+} else {
+return configureJobForAysncIndex(schemaName, indexTable, 
dataTable, useDirectApi);
 }
-final Configuration configuration = 
HBaseConfiguration.addHbaseResources(getConf());
-final String schemaName = 
cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-final String dataTable = 
cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-final String indexTable = 
cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+}
+
+private Job configureJobForPartialBuild(String schemaName, String 
dataTable) throws Exception {
 final String qDataTable = 
SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-final String qIndexTable = 
SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+final PTable pdataTable = PhoenixRuntime.getTable(connection, 
qDataTable);
 connection = ConnectionUtil.getInputConnection(configuration);
-if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-throw new IllegalArgumentException(String.format(
-" %s is not an index table for %s ", qIndexTable, 
qDataTable));
+long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+PTable indexWithMinDisableTimestamp = null;
+
+//Get Indexes in building state, minDisabledTimestamp 
+List disableIndexes = new ArrayList();
+List disabledPIndexes = new ArrayList();
+for (PTable index : pdataTable.getIndexes()) {
+if (index.getIndexState().equals(PIndexState.BUILDING)) {
+disableIndexes.add(index.getTableName().getString());
+disabledPIndexes.add(index);
+if (minDisableTimestamp > 
index.getIndexDisableTimestamp()) {
+minDisableTimestamp = 
index.getIndexDisableTimestamp();
+indexWithMinDisableTimestamp = index;
+}
+}
+}
+
+if (indexWithMinDisableTimestamp == null) {
+throw new Exception("There is no index for a datatable to 
be rebuild:" + qDataTable);
 }
+if (minDisableTimestamp == 0) {
+throw new Exception("It seems Index " + 
indexWithMinDisableTimestamp
++ " has disable timestamp as 0 , please run 
IndexTool with IndexName to build it first");
+// TODO probably we can initiate the job by ourself or can 
skip them while making the list for partial build with a warning
+}
+
+long maxTimestamp = getMaxRebuildAsyncDate(schemaName, 
disableIndexes);
+
+//serialize index maintaienr in job conf with Base64 TODO: 
Need to find better way to serialize them in conf.
+List maintainers = 
Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+for (PTable index : disabledPIndexes) {
+maintainers.add(index.getIndexMaintainer(pdataTable, 
connection.unwrap(PhoenixConnection.class)));
+}
+ImmutableBytesWr

[GitHub] phoenix issue #210: PHOENIX-2890 Extend IndexTool to allow incremental index...

2016-12-07 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/210
  
Thank you so much @chrajeshbabu for the review; raised PHOENIX-3525 for the same.
@JamesRTaylor, do you want to take a look, or is it OK to commit now?



---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-12-07 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r91454910
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -85,7 +102,7 @@
 private static final Option DATA_TABLE_OPTION = new Option("dt", 
"data-table", true,
 "Data table name (mandatory)");
 private static final Option INDEX_TABLE_OPTION = new Option("it", 
"index-table", true,
-"Index table name(mandatory)");
+"Index table name(not required in case of partial 
rebuilding)");
--- End diff --

This was taken care of in the last commit.


---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-12-07 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r91454867
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 ---
@@ -385,6 +383,8 @@ private void updateTable(Connection conn, String 
tableName) throws SQLException
 }
 
 public static class FailingRegionObserver extends SimpleRegionObserver 
{
+public static volatile boolean FAIL_WRITE = false;
--- End diff --

This was taken care of in the last commit.


---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-12-07 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r91454865
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java 
---
@@ -216,6 +219,15 @@ public void testIndexCreateDrop() throws Exception {
 assertFalse(rs.next());
 
 assertActiveIndex(conn, INDEX_DATA_SCHEMA, indexDataTable);
+
+ddl = "ALTER INDEX " + indexName + " ON " + INDEX_DATA_SCHEMA 
+ QueryConstants.NAME_SEPARATOR + indexDataTable + " REBUILD ASYNC";
+conn.createStatement().execute(ddl);
+// Verify the metadata for index is correct.
+rs = conn.getMetaData().getTables(null, 
StringUtil.escapeLike(INDEX_DATA_SCHEMA), indexName , new String[] 
{PTableType.INDEX.toString()});
+assertTrue(rs.next());
+assertEquals(indexName , rs.getString(3));
+assertEquals(PIndexState.BUILDING.toString(), 
rs.getString("INDEX_STATE"));
--- End diff --

This was taken care of in the last commit.


---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-16 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r88209639
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildWithNamespaceEnabled.java
 ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HConstants;
+import 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildWithNamespaceEnabled}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildWithNamespaceEnabled extends 
IndexToolForPartialBuildIT {
+
+
+public IndexToolForPartialBuildWithNamespaceEnabled(boolean 
localIndex, boolean isNamespaceEnabled) {
+super(localIndex);
+this.isNamespaceEnabled=isNamespaceEnabled;
+}
+
+@BeforeClass
+public static void doSetup() throws Exception {
+Map serverProps = 
Maps.newHashMapWithExpectedSize(7);
+serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put("hbase.coprocessor.region.classes", 
FailingRegionObserver.class.getName());
+serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+serverProps.put("hbase.client.pause", "5000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, 
"2000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"1000");
+serverProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
"true");
+Map clientProps = 
Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
"true");
+setUpTestDriver(new 
ReadOnlyProps(serverProps.entrySet().iterator()), new 
ReadOnlyProps(clientProps.entrySet().iterator()));
+}
+
+@Parameters(name="localIndex = {0} , isNamespaceEnabled = {1}")
+public static Collection data() {
+return Arrays.asList(new Boolean[][] { 
+ { false, true},{ true, false }
--- End diff --

As discussed with you, it's difficult to do: the mini cluster can be shared across tests, and in a parallel environment restarting the cluster may cause other tests to fail.


---


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-27 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/211

PHOENIX-3254 IndexId Sequence is incremented even if index exists already

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-3254

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/211.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #211


commit 5268a90d38c21e8f93b8b2782f2d9e146642a463
Author: Ankit Singhal 
Date:   2016-09-27T09:29:11Z

PHOENIX-3254 IndexId Sequence is incremented even if index exists already




---


[GitHub] phoenix pull request #210: PHOENIX 2890 Extend IndexTool to allow incrementa...

2016-09-27 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/210

PHOENIX 2890 Extend IndexTool to allow incremental index rebuilds



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-2890

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/210.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #210


commit 5cbf908184900e3708f5cf9471a529b8cf579e97
Author: Ankit Singhal 
Date:   2016-09-27T09:25:59Z

PHOENIX 2890 Extend IndexTool to allow incremental index rebuilds




---


[GitHub] phoenix pull request #153: PHOENIX-1311 HBase namespaces surfaced in phoenix

2016-09-14 Thread ankitsinghal
Github user ankitsinghal closed the pull request at:

https://github.com/apache/phoenix/pull/153


---


[GitHub] phoenix issue #153: PHOENIX-1311 HBase namespaces surfaced in phoenix

2016-09-14 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/153
  
This pull request was already merged and the feature is available in v4.8.0.


---


[GitHub] phoenix issue #192: Phoenix Cursors implementation

2016-09-06 Thread ankitsinghal
Github user ankitsinghal commented on the issue:

https://github.com/apache/phoenix/pull/192
  
@anirudha, thanks for the pull request. Is it possible for you to change the subject to include PHOENIX-2606, or link it somehow, so that we can see the comments logged on the JIRA as well?


With the current implementation, every query is modified to include the primary key columns and tries to use RVC, which may produce improper results in the following cases; sometimes there is also no advantage in using it, since it doesn't limit the scan:

- duplicate values for a column
- ORDER BY on a non-primary-key column
- primary key columns having null values
- aggregate queries


@samarthjain, can you please confirm, based on your observations, for which queries RVC cannot be used?

@anirudha 
To abstract away the query complexities for the initial cursor support, we should not modify any query. Instead, we can keep the ResultSet object open for the corresponding cursor (with a timeout) and start caching rows as we proceed with next() calls (FETCH NEXT FROM cursor); the cache will then serve previous() calls (FETCH PRIOR FROM cursor) on the ResultSet. (The cache will also serve next() calls while we are positioned inside it after moving backwards.) A rough sketch of this cache idea follows the pros/cons list below.

Pros/cons of the above approach:

Pros:

- Highly abstracted; we don't need to understand each and every query and develop separate logic for them.
- Unlike the current implementation, we don't need to extend the ORDER BY (on a non-primary-key column) to include primary key columns for uniqueness. Doing that causes problems at the server, because we end up with more keys (almost all keys) to sort every time (and RVC does not restrict the scan in this case).
- The cache size can be limited by the user, and if we exhaust the cache it can be refreshed by re-running the query with LIMIT + OFFSET only.
- An optimization can be done for flat queries (including INDEX queries) by using the last and peeked SCAN keys (instead of RVC, to handle nulls and duplicates properly) to update the cache instead of LIMIT + OFFSET.
- Snapshot/static queries can be provided by storing the compile time of OPEN CURSOR (which can be used to cap the scan's upper timestamp bound).

Cons:

- Since previous() is not supported on the server (HBase), there is cache overhead on the client to keep the results needed to support previous().
- Results have to be re-calculated after we hit the cache limit or the scanner times out.
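For illustration, a rough sketch of the client-side cache mentioned above; the class and method names here are hypothetical, not part of Phoenix:
```
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

import org.apache.phoenix.iterate.ResultIterator;
import org.apache.phoenix.schema.tuple.Tuple;

// Hypothetical helper: remembers rows seen so far so FETCH PRIOR can walk backwards.
class CursorRowCache {
    private final List<Tuple> cached = new ArrayList<Tuple>();
    private int pos = -1; // index of the row returned by the last fetch

    Tuple fetchNext(ResultIterator scanner) throws SQLException {
        if (pos + 1 < cached.size()) {
            return cached.get(++pos);      // replay a row we already cached
        }
        Tuple next = scanner.next();       // advance the open scanner
        if (next != null) {
            cached.add(next);              // remember it for FETCH PRIOR
            pos = cached.size() - 1;
        }
        return next;
    }

    Tuple fetchPrior() {
        return pos > 0 ? cached.get(--pos) : null; // served entirely from the cache
    }
}
```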


@JamesRTaylor , WDYT? any suggestions


---


[GitHub] phoenix pull request #192: Phoenix Cursors implementation

2016-09-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/192#discussion_r77613518
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java ---
@@ -247,11 +269,11 @@ private void updateSelectValues(Tuple currentRow) 
throws SQLException {
 PDataType type = 
projector.getExpression().getDataType();
 Object value = projector.getValue(currentRow, type, 
new ImmutableBytesPtr());
 if(value == null){
-whereExpressionNext = 
whereExpressionNext.replaceFirst(":"+Integer.toString(colBinding),"NULL");
-whereExpressionPrior = 
whereExpressionPrior.replaceFirst(":"+Integer.toString(colBinding),"NULL");
+whereExpressionNext = 
whereExpressionNext.replaceFirst("::"+Integer.toString(colBinding),"NULL");
+whereExpressionPrior = 
whereExpressionPrior.replaceFirst("::"+Integer.toString(colBinding),"NULL");
 } else{
-whereExpressionNext = 
whereExpressionNext.replaceFirst(":"+Integer.toString(colBinding),formatString(value,
 type));
-whereExpressionPrior = 
whereExpressionPrior.replaceFirst(":"+Integer.toString(colBinding),formatString(value,
 type));
+whereExpressionNext = 
whereExpressionNext.replaceFirst("::"+Integer.toString(colBinding),formatString(value,
 type));
+whereExpressionPrior = 
whereExpressionPrior.replaceFirst("::"+Integer.toString(colBinding),formatString(value,
 type));
--- End diff --

I don't think we need to modify queries for this support. You may need to remove this code. (And instead of string replacement, why can't we use prepared statements directly?)
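For illustration, a hedged sketch of the prepared-statement alternative; the SQL and variable names below are invented for the example:
```
// bind the cursor's current key values instead of splicing them into the SQL text
PreparedStatement ps = conn.prepareStatement(
        "SELECT PK1, PK2, COL FROM MY_TABLE WHERE (PK1, PK2) > (?, ?) ORDER BY PK1, PK2");
ps.setObject(1, lastPk1); // values captured from the previously fetched row
ps.setObject(2, lastPk2);
ResultSet rs = ps.executeQuery();
```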


---


[GitHub] phoenix pull request #192: Phoenix Cursors implementation

2016-09-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/192#discussion_r77612514
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/CursorUtil.java ---
@@ -62,24 +64,44 @@
 private boolean selectHasPKCol = false;
 
 private CursorWrapper(String cursorName, String selectSQL, 
List orderBy){
+this(cursorName, selectSQL, orderBy, false);
+}
+
+private CursorWrapper(String cursorName, String selectSQL, 
List orderBy, boolean isCursorStatic){
 this.cursorName = cursorName;
 this.orderBy = orderBy;
 this.selectSQL = selectSQL;
 this.listSelectColNames = new 
ArrayList(java.util.Arrays.asList(selectSQL
 .substring(selectSQL.indexOf("SELECT") + 7, 
selectSQL.indexOf("FROM")).trim()
 .replaceAll("[()]", "").toUpperCase().split(",")));
+this.isCursorStatic = isCursorStatic;
 }
 
 private synchronized void openCursor(Connection conn) throws 
SQLException {
 if(isOpen){
 return;
 }
-QueryPlan plan = 
conn.prepareStatement(selectSQL).unwrap(PhoenixPreparedStatement.class).compileQuery();
-List listPKColNames = new 
ArrayList(Arrays.asList(plan.getTableRef().getTable()
-.getPKColumns().toString().replaceAll("[\\[ \\]]", 
"").toUpperCase().split(",")));
 StringBuilder whereBuilder = new StringBuilder(" WHERE (");
 StringBuilder orderByBuilder = new StringBuilder(" ORDER BY ");
 StringBuilder selectBuilder = new 
StringBuilder(listSelectColNames.toString().replaceAll("[\\[ \\]]", ""));
+
+QueryPlan plan = 
conn.prepareStatement(selectSQL).unwrap(PhoenixPreparedStatement.class).compileQuery();
+PTable table = plan.getTableRef().getTable();
+//Process static cursor option
+if(isCursorStatic){
+int columnIndex = table.getRowTimestampColPos();
+if(columnIndex == -1) throw new SQLException("Cursor " + 
cursorName + " declared as STATIC, " +
+"but target table does not contain a ROW_TIMESTAMP 
column!");
+
+String columnName = 
table.getColumns().get(columnIndex).getName().getString();
+whereBuilder.append(columnName).append(") <= (");
+String timestampFilter = 
"TO_DATE('"+DateUtil.getDateFormatter(DateUtil.DEFAULT_DATE_FORMAT).format(new 
java.sql.Date(System.currentTimeMillis()))+"')";
+whereBuilder.append(timestampFilter).append(") AND (");
+}
+
+List listPKColNames = new 
ArrayList(Arrays.asList(table.getPKColumns().toString().replaceAll("[\\[
 \\]]", "").toUpperCase().split(",")));
+
+//Process ORDER BY expressions in the internal select statement
--- End diff --

By static cursor, do you mean that subsequent fetches on the cursor should not see new data upserted after the cursor is opened?
If so, then we don't need a timestamp filter for it; we can use scan.setTimeRange(0, CURRENT_SCN != null ? CURRENT_SCN : compileTimeOrCursorOpenTime).
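For illustration, a hedged sketch of that idea (cursorOpenTimeMillis is assumed to be captured when the cursor is opened; Scan.setTimeRange can throw IOException, which would need handling):
```
Long scn = connection.getSCN();                          // CURRENT_SCN, may be null
long upper = (scn != null) ? scn : cursorOpenTimeMillis; // otherwise cap at cursor-open time
scan.setTimeRange(0, upper);                             // later upserts stay invisible to fetches
```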


---


[GitHub] phoenix pull request #192: Phoenix Cursors implementation

2016-09-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/192#discussion_r77612036
  
--- Diff: phoenix-core/src/main/antlr3/PhoenixSQL.g ---
@@ -726,8 +727,11 @@ upsert_column_refs returns 
[Pair,List> ret]
 
 // Parse a full declare cursor expression structure.
 declare_cursor_node returns [DeclareCursorStatement ret]
-:DECLARE c=cursor_name CURSOR FOR s=select_node
-{ret = factory.declareCursor(c, s); }
+@init{boolean isStatic = false;}
+:   DECLARE c=cursor_name CURSOR
+(STATIC {isStatic = true;})?
--- End diff --

here it can be replaced with (static=STATIC)? and then use factory.declareCursor(c, s, static != null)


---


[GitHub] phoenix pull request: PHOENIX-2862 Do client server compatibility ...

2016-05-02 Thread ankitsinghal
GitHub user ankitsinghal opened a pull request:

https://github.com/apache/phoenix/pull/167

PHOENIX-2862 Do client server compatibility checks before upgrading system 
tables



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ankitsinghal/phoenix PHOENIX-2862

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/167.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #167


commit 74f15dcfd4faba4c1aa1d4624fda407298460801
Author: Ankit Singhal 
Date:   2016-05-02T09:06:38Z

PHOENIX-2862 Do client server compatibility checks before upgrading system 
tables




---


[GitHub] phoenix pull request: IndexFailed when data table name contains ":...

2016-04-27 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/166#issuecomment-215305552
  
Yes, I have already fixed it as a part of my work on PHOENIX-2862.


---


[GitHub] phoenix pull request: IndexFailed when data table name contains ":...

2016-04-27 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/166#issuecomment-215140554
  
can you try this: use (single quote, double quote, double quote) to escape the double quotes on the command line.
bin/hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table '""raw:table""' --index-table INDEX --output-path hdfs://ip:port/folder

It should work with versions prior to the current master (master has the namespace mapping feature, which requires some changes to recognize mapped tables and escape them properly; SchemaUtil.getTableNameFromFullName and getSchemaNameFromFullName need modification).



---


[GitHub] phoenix pull request: IndexFailed when data table name contains ":...

2016-04-25 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/166#issuecomment-214250448
  
Are you getting the same exception?
Was raw:table an existing HBase table over which you created a Phoenix view/table?
Is the table name raw:table or RAW:TABLE?



---


[GitHub] phoenix pull request: IndexFailed when data table name contains ":...

2016-04-25 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/166#issuecomment-214203754
  
Have you tried passing the table name with quotes?
bin/hbase org.apache.phoenix.mapreduce.index.IndexTool --data-table "\"raw:table\"" --index-table INDEX --output-path hdfs://ip:port/folder


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-13 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-209773123
  
I have made the changes in the last commit.
Thank you so much @samarthjain and @JamesRTaylor for taking the time to review it. I'll get this committed.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-13 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-209525622
  
@JamesRTaylor, I am done with the changes in the last commit, so is it good to commit now?


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-13 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-209304973
  
@JamesRTaylor ,

> There are likely lots of issues if isNamespaceMapping config property 
changes from true to false, no? 

There will not be any problem if **phoenix.connection.isNamespaceMappingEnabled** is changed from true to false, unless the system tables have been migrated after setting **"phoenix.connection.mapSystemTablesToNamespace"** to true, as that migrates the SYSTEM tables to the SYSTEM namespace. The property needs to be set on both client and server for index failure handling and stats collection, which refer to the system tables directly on the server, to keep working correctly. Let me know if the property names need to be changed to indicate that they are server-side as well as client-side properties.

>  Wouldn't existing namespace mapped tables not be found if it was changed 
from true to false after tables had been created?

Tables already mapped will still be accessible, although an old client (…).

> What would be the purpose of allowing a schema (namespace) to be created if the feature is off? If none, then it's probably best to give an error message if isNamespaceMapping is off and CREATE SCHEMA is used.

Yes, I think we can throw an exception during CREATE SCHEMA only if the property is not set, but other constructs (like USE <schema>) should still be allowed, since mapped tables remain accessible even when isNamespaceMappingEnabled is set to false.



 










---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-13 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-209296286
  
@samarthjain 

> Will the table T be created in the namespace S? If the feature is 
disabled, it shouldn't be. Would you mind adding a test for this if it hasn't 
been already?

Right, table T will not be created in namespace S if isNamespaceMapped is disabled. There is a test for this in CreateTableIT#testCreateTable.

> Also, how about if we do a CREATE TABLE SCHEMA.TABLENAME with the feature 
on. Does it end up creating the namespace named SCHEMA first and then maps the 
table to that namespace

If the feature is on and the SCHEMA is not already present, it will fail with SchemaNotFoundException. The user has to create the schema first and can then create the table when the config is set to true.
But when the config is set to false, it will create a table named SCHEMA.TABLENAME in the default namespace without throwing any SchemaNotFoundException.
I have added a test (CreateTableIT#testCreateTableWithoutSchema) for this in the latest commit.
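For illustration, a rough sketch of the flow described above when the feature is on; the schema and table names are made up, and getUrl() is the usual test helper:
```
Properties props = new Properties();
props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
    // with namespace mapping enabled, the schema (HBase namespace) must exist first
    conn.createStatement().execute("CREATE SCHEMA S");
    // maps to table T in namespace S; without the CREATE SCHEMA above this
    // would fail with SchemaNotFoundException
    conn.createStatement().execute("CREATE TABLE S.T (PK VARCHAR PRIMARY KEY)");
}
```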











---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-12 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-208944403
  
@samarthjain, thanks for the review.
I'm unsure about disabling the schema constructs when isNamespaceMapping is not enabled in the config. This is a client-side property, and even if it is unset, a user can still access a table mapped to a namespace that was created by another client which had the property set. The property just ensures that the schema is mapped to a namespace during table creation.

I think these constructs still make sense independently, but @JamesRTaylor / @samarthjain, if you both think we should disable them, I'm OK with that too.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-12 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r59387189
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java ---
@@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface 
metaTable, HTableInterface s
 }
 return false;
 }
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String srcTableName,
+String destTableName, ReadOnlyProps props, Long ts, String 
phoenixTableName, PTableType pTableType)
+throws SnapshotCreationException, 
IllegalArgumentException, IOException, InterruptedException,
+SQLException {
+srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
+if (!SchemaUtil.isNamespaceMappingEnabled(
+SchemaUtil.isSystemTable(srcTableName.getBytes()) ? 
PTableType.SYSTEM : null,
+props)) { throw new 
IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
+? "For system table " + 
QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
++ " also needs to be enabled along with " 
+ QueryServices.IS_NAMESPACE_MAPPING_ENABLED
+: QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " 
is not enabled"); }
+
+if (PTableType.TABLE.equals(pTableType) || 
PTableType.INDEX.equals(pTableType)) {
+admin.snapshot(srcTableName, srcTableName);
+admin.cloneSnapshot(srcTableName.getBytes(), 
destTableName.getBytes());
+admin.disableTable(srcTableName);
--- End diff --

ok.. I have added code to delete the snapshot.

Re-trying from the Phoenix snapshot would be risky: if, say, the upgrade failed and the user kept using the un-mapped table for a while before going for the upgrade again, the snapshot taken during the last upgrade would be stale.
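For illustration, a hedged sketch of that cleanup, following the HBaseAdmin calls already quoted in this diff:
```
admin.snapshot(srcTableName, srcTableName);
admin.cloneSnapshot(srcTableName.getBytes(), destTableName.getBytes());
admin.disableTable(srcTableName);
// drop the snapshot once the clone has succeeded, so a stale copy cannot be
// picked up by a later, retried upgrade
admin.deleteSnapshot(srcTableName);
```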




---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-12 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r59386509
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java ---
@@ -26,7 +28,8 @@
 public PTableKey(PName tenantId, String name) {
 Preconditions.checkNotNull(name);
 this.tenantId = tenantId;
-this.name = name;
+this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? 
name
--- End diff --

Agreed; removed in the latest commit.


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-12 Thread ankitsinghal
Github user ankitsinghal closed the pull request at:

https://github.com/apache/phoenix/pull/154


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-11 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-208446625
  
@samarthjain, added a test case for the upgrade util too. Please look at the replies to your review comments and let me know if they are fine.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r59075136
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection 
connection, NamedTableNode ta
if (def.getColumnDefName().getFamilyName() != null) {
families.add(new 
PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.emptyList()));
}
-   }
-   Long scn = connection.getSCN();
-   PTable theTable = new PTableImpl(connection.getTenantId(), 
table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? 
HConstants.LATEST_TIMESTAMP : scn, families);
+}
+Long scn = connection.getSCN();
+String schema = table.getName().getSchemaName();
+if (connection.getSchema() != null) {
--- End diff --

The code under discussion is used only during **creation** of **mapped views**; it is never used anywhere else.
So when the user provides a table name without a schema, the connection schema should be used if it is set, and when the user provides a table name with a schema, we should ignore the connection schema. I think this is how most databases work.

For example, with the connection schema set to 'S' (`phoenix> USE 'S'`), if the user creates a mapped view:
`create view A.T(pk ...)` // maps to the A.T table only
`create view T(pk ...)` // maps to the S.T table

And to map a table in the default schema (the user can unset the connection schema back to null with `USE DEFAULT`):
`create view T(pk ...)` // maps to the T table

Maybe I'm missing something; would you mind giving some examples in terms of SQL statements?


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58994358
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java ---
@@ -96,6 +97,7 @@ public void testUpsertSelectDoesntSeeUpsertedData() 
throws Exception {
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
 conn.createStatement().execute("CREATE SEQUENCE "+seqName);
+BaseTest.createSchema(getUrl(), fullTableName, null);
--- End diff --

No, this is not needed; I have removed it.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58994197
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java ---
@@ -632,6 +645,16 @@ public static ExecutionCommand parseArgs(String[] 
args) {
 return execCmd;
 }
 
+private static String validateTableName(String tableName) {
--- End diff --

No, this is to validate during upgrade only, in case the user passes a table name containing ":" by mistake, thinking of the HBase table name.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58994035
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/stats/StatisticsCollectorFactory.java
 ---
@@ -63,6 +64,10 @@ public static StatisticsCollector 
createStatisticsCollector(
 
DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME));
 
DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME));
 
DISABLE_STATS.add(TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME));
+
DISABLE_STATS.add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,true));
--- End diff --

Same as above; it is just to avoid collecting stats for system tables, so the list contains both names.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58993926
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/PTableKey.java ---
@@ -26,7 +28,8 @@
 public PTableKey(PName tenantId, String name) {
 Preconditions.checkNotNull(name);
 this.tenantId = tenantId;
-this.name = name;
+this.name = !name.contains(QueryConstants.NAMESPACE_SEPARATOR) ? 
name
--- End diff --

We are still maintaining the Phoenix table name (A.B) in the cache, so if a physical table name is passed as a key by mistake, it should still resolve correctly. I think I kept this as a precaution only. LocalIndexIT/ViewIndexIT covers it, right?
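
In other words, the cache key is always the Phoenix name; here is a minimal standalone sketch of the normalization idea (not the actual PTableKey code):

```java
// Sketch only: a physical HBase name (A:B) passed by mistake is normalized to
// the Phoenix name (A.B) so it hits the same cache entry.
public class TableKeyNormalization {
    static String normalize(String name) {
        return name.replace(':', '.');
    }

    public static void main(String[] args) {
        System.out.println(normalize("A:B")); // A.B
        System.out.println(normalize("A.B")); // A.B (already normalized)
    }
}
```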


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58993453
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -631,7 +664,7 @@ private boolean 
addIndexesFromPhysicalTable(MetaDataMutationResult result, Long
 if (view.getType() != PTableType.VIEW || view.getViewType() == 
ViewType.MAPPED) {
 return false;
 }
-String physicalName = view.getPhysicalName().getString();
+String physicalName = view.getPhysicalNames().get(0).toString();
--- End diff --

Yes, we don't support views over multiple tables; that's why I used get(0) only. But I have undone it by modifying the other APIs to resolve the schema and table name correctly even if the namespace separator is present, e.g. SchemaUtil.getTableNameFromFullName().
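
The idea is that the helper splits on whichever separator is present; a minimal sketch (illustrative only, not the actual SchemaUtil implementation):

```java
// Illustrative sketch: return the table-name part of a full name, whether the
// separator is ':' (HBase namespace) or '.' (Phoenix schema).
public class FullNameSplitDemo {
    static String getTableNameFromFullName(String fullName) {
        int idx = fullName.indexOf(':');
        if (idx < 0) {
            idx = fullName.indexOf('.');
        }
        return idx < 0 ? fullName : fullName.substring(idx + 1);
    }

    public static void main(String[] args) {
        System.out.println(getTableNameFromFullName("S:T")); // T
        System.out.println(getTableNameFromFullName("S.T")); // T
        System.out.println(getTableNameFromFullName("T"));   // T
    }
}
```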


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58992390
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/controller/MetadataRpcController.java
 ---
@@ -36,7 +37,16 @@
.add(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME)
.add(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME)
.add(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME)
-   
.add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME).build();
+   .add(PhoenixDatabaseMetaData.SYSTEM_FUNCTION_NAME)
+
.add(SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES,
 true)
--- End diff --

Yeah, namespace mapping for system tables will not always be true, as it is controlled with phoenix.connection.mapSystemTablesToNamespace, but this is just the list of system tables, so it includes both SYSTEM.CATALOG and SYSTEM:CATALOG to set the RPC priority accordingly.
Do you think it could have any impact?


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-08 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58991938
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
 ---
@@ -190,14 +190,8 @@ public void testSchemaMetadataScan() throws 
SQLException {
 
 rs = dbmd.getSchemas(null, null);
 assertTrue(rs.next());
-assertEquals(rs.getString("TABLE_SCHEM"),null);
--- End diff --

Now we will show only those schemas that were created with the "CREATE SCHEMA" command and that have the table name as an empty string, so null is not expected.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-07 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-207058780
  
@samarthjain , any more review comments?


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-206406674
  
Thanks @samarthjain for the review; I have incorporated the review comments in the last commit.
I'll write a test for the upgrade code soon, though I have tested it on a local cluster for tables with views and indexes.
Please proceed with your further review.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58715199
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -265,12 +332,18 @@ public SingleTableColumnResolver(PhoenixConnection 
connection, NamedTableNode ta
if (def.getColumnDefName().getFamilyName() != null) {
families.add(new 
PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.emptyList()));
}
-   }
-   Long scn = connection.getSCN();
-   PTable theTable = new PTableImpl(connection.getTenantId(), 
table.getName().getSchemaName(), table.getName().getTableName(), scn == null ? 
HConstants.LATEST_TIMESTAMP : scn, families);
+}
+Long scn = connection.getSCN();
+String schema = table.getName().getSchemaName();
+if (connection.getSchema() != null) {
--- End diff --

Actually, when the schema for the table {T} is not present, we try to resolve the table with the connection schema ({schema}.T) if it is set on the connection.
I have added a test case, UseSchemaIT#testMappedView; is that what you are expecting?


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58703359
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java ---
@@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface 
metaTable, HTableInterface s
 }
 return false;
 }
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String srcTableName,
--- End diff --

Yeah, I'll add a test for upgrade code.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58701167
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java ---
@@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface 
metaTable, HTableInterface s
 }
 return false;
 }
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String srcTableName,
+String destTableName, ReadOnlyProps props, Long ts, String 
phoenixTableName, PTableType pTableType)
+throws SnapshotCreationException, 
IllegalArgumentException, IOException, InterruptedException,
+SQLException {
+srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
+if (!SchemaUtil.isNamespaceMappingEnabled(
+SchemaUtil.isSystemTable(srcTableName.getBytes()) ? 
PTableType.SYSTEM : null,
+props)) { throw new 
IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
+? "For system table " + 
QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
++ " also needs to be enabled along with " 
+ QueryServices.IS_NAMESPACE_MAPPING_ENABLED
+: QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " 
is not enabled"); }
+
+if (PTableType.TABLE.equals(pTableType) || 
PTableType.INDEX.equals(pTableType)) {
+admin.snapshot(srcTableName, srcTableName);
+admin.cloneSnapshot(srcTableName.getBytes(), 
destTableName.getBytes());
+admin.disableTable(srcTableName);
+admin.deleteTable(srcTableName);
+}
+if (phoenixTableName == null) {
+phoenixTableName = srcTableName;
+}
+Put put = new Put(SchemaUtil.getTableKey(null, 
SchemaUtil.getSchemaNameFromFullName(phoenixTableName),
+SchemaUtil.getTableNameFromFullName(phoenixTableName)), 
ts);
+put.addColumn(QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, 
PhoenixDatabaseMetaData.IS_NAMESPACE_MAPPED_BYTES,
+PBoolean.INSTANCE.toBytes(Boolean.TRUE));
+metatable.put(put);
+}
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String tableName,
+ReadOnlyProps props, Long ts) throws 
SnapshotCreationException, IllegalArgumentException, IOException,
+InterruptedException, SQLException {
+String destTablename = SchemaUtil
+
.normalizeIdentifier(SchemaUtil.getPhysicalTableName(tableName, 
props).getNameAsString());
+mapTableToNamespace(admin, metatable, tableName, destTablename, 
props, ts, null, PTableType.TABLE);
+}
+
+public static void upgradeTable(PhoenixConnection conn, String 
srcTable) throws SQLException,
+SnapshotCreationException, IllegalArgumentException, 
IOException, InterruptedException {
+ReadOnlyProps readOnlyProps = conn.getQueryServices().getProps();
+if (conn.getClientInfo(PhoenixRuntime.TENANT_ID_ATTRIB) != null) { 
throw new SQLException(
--- End diff --

Yes. Now I'm throwing an exception if a schema is set on the connection.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58700415
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java ---
@@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface 
metaTable, HTableInterface s
 }
 return false;
 }
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String srcTableName,
+String destTableName, ReadOnlyProps props, Long ts, String 
phoenixTableName, PTableType pTableType)
+throws SnapshotCreationException, 
IllegalArgumentException, IOException, InterruptedException,
+SQLException {
+srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
+if (!SchemaUtil.isNamespaceMappingEnabled(
+SchemaUtil.isSystemTable(srcTableName.getBytes()) ? 
PTableType.SYSTEM : null,
+props)) { throw new 
IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
+? "For system table " + 
QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
++ " also needs to be enabled along with " 
+ QueryServices.IS_NAMESPACE_MAPPING_ENABLED
+: QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " 
is not enabled"); }
+
+if (PTableType.TABLE.equals(pTableType) || 
PTableType.INDEX.equals(pTableType)) {
+admin.snapshot(srcTableName, srcTableName);
+admin.cloneSnapshot(srcTableName.getBytes(), 
destTableName.getBytes());
+admin.disableTable(srcTableName);
+admin.deleteTable(srcTableName);
+}
+if (phoenixTableName == null) {
--- End diff --

phoenixTableName and the physical table name are different in the case of indexes, so it is required there. But now, in my latest commit, I have removed the null check and always pass the phoenixTableName.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58692384
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java ---
@@ -1279,4 +1304,129 @@ public static boolean truncateStats(HTableInterface 
metaTable, HTableInterface s
 }
 return false;
 }
+
+public static void mapTableToNamespace(HBaseAdmin admin, 
HTableInterface metatable, String srcTableName,
+String destTableName, ReadOnlyProps props, Long ts, String 
phoenixTableName, PTableType pTableType)
+throws SnapshotCreationException, 
IllegalArgumentException, IOException, InterruptedException,
+SQLException {
+srcTableName = SchemaUtil.normalizeIdentifier(srcTableName);
+if (!SchemaUtil.isNamespaceMappingEnabled(
+SchemaUtil.isSystemTable(srcTableName.getBytes()) ? 
PTableType.SYSTEM : null,
+props)) { throw new 
IllegalArgumentException(SchemaUtil.isSystemTable(srcTableName.getBytes())
+? "For system table " + 
QueryServices.IS_SYSTEM_TABLE_MAPPED_TO_NAMESPACE
++ " also needs to be enabled along with " 
+ QueryServices.IS_NAMESPACE_MAPPING_ENABLED
+: QueryServices.IS_NAMESPACE_MAPPING_ENABLED + " 
is not enabled"); }
+
+if (PTableType.TABLE.equals(pTableType) || 
PTableType.INDEX.equals(pTableType)) {
+admin.snapshot(srcTableName, srcTableName);
+admin.cloneSnapshot(srcTableName.getBytes(), 
destTableName.getBytes());
+admin.disableTable(srcTableName);
--- End diff --

Deleting srcTable will not delete the snapshot.
And yes, for restore purposes only, I thought we should not delete the snapshot.


---


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/153#discussion_r58687494
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java ---
@@ -897,4 +941,86 @@ public static boolean hasRowTimestampColumn(PTable 
table) {
 PName schemaName = dataTable.getSchemaName();
 return getTableKey(tenantId == null ? ByteUtil.EMPTY_BYTE_ARRAY : 
tenantId.getBytes(), schemaName == null ? ByteUtil.EMPTY_BYTE_ARRAY : 
schemaName.getBytes(), dataTable.getTableName().getBytes());
 }
+
+public static byte[] getSchemaKey(String schemaName) {
+return SchemaUtil.getTableKey(null, schemaName, 
MetaDataClient.EMPTY_TABLE);
+}
+
+public static PName getPhysicalHBaseTableName(PName pName, boolean 
isNamespaceMapped, PTableType type) {
+return getPhysicalHBaseTableName(pName.toString(), 
isNamespaceMapped, type);
+}
+
+public static PName getPhysicalHBaseTableName(byte[] tableName, 
boolean isNamespaceMapped, PTableType type) {
+return getPhysicalHBaseTableName(Bytes.toString(tableName), 
isNamespaceMapped, type);
+}
+
+public static TableName getPhysicalTableName(String fullTableName, 
ReadOnlyProps readOnlyProps) {
+return getPhysicalName(Bytes.toBytes(fullTableName), 
readOnlyProps);
+}
+
+public static TableName getPhysicalTableName(byte[] fullTableName, 
Configuration conf) {
+return getPhysicalTableName(fullTableName, 
isNamespaceMappingEnabled(
+isSystemTable(fullTableName) ? PTableType.SYSTEM : null, 
new ReadOnlyProps(conf.iterator())));
+}
+
+public static TableName getPhysicalName(byte[] fullTableName, 
ReadOnlyProps readOnlyProps) {
+return getPhysicalTableName(fullTableName,
+isNamespaceMappingEnabled(isSystemTable(fullTableName) ? 
PTableType.SYSTEM : null, readOnlyProps));
+}
+
+public static TableName getPhysicalTableName(byte[] fullTableName, 
boolean isNamespaceMappingEnabled) {
+if (!isNamespaceMappingEnabled) { return 
TableName.valueOf(fullTableName); }
+String tableName = getTableNameFromFullName(fullTableName);
+String schemaName = getSchemaNameFromFullName(fullTableName);
+return TableName.valueOf(schemaName, tableName);
+}
+
+public static String getSchemaNameFromHBaseFullName(byte[] tableName, 
ReadOnlyProps props) {
+if (tableName == null) { return null; }
+int index = isNamespaceMappingEnabled(null, props) ? 
indexOf(tableName, QueryConstants.NAMESPACE_SEPARATOR_BYTE)
+: indexOf(tableName, QueryConstants.NAME_SEPARATOR_BYTE);
+if (index < 0) { return StringUtil.EMPTY_STRING; }
+return Bytes.toString(tableName, 0, index);
+}
+
+public static PName getPhysicalHBaseTableName(String tableName, 
boolean isNamespaceMapped, PTableType type) {
+if (!isNamespaceMapped) { return PNameFactory.newName(tableName); }
+return PNameFactory
+.newName(tableName.replace(QueryConstants.NAME_SEPARATOR, 
QueryConstants.NAMESPACE_SEPARATOR));
+}
+
+public static boolean isSchemaCheckRequired(PTableType tableType, 
ReadOnlyProps props) {
+if (PTableType.TABLE.equals(tableType) && 
isNamespaceMappingEnabled(tableType, props)) { return true; }
+return false;
+}
+
+public static boolean isNamespaceMappingEnabled(PTableType type, 
ReadOnlyProps readOnlyProps) {
+return 
readOnlyProps.getBoolean(QueryServices.IS_NAMESPACE_MAPPING_ENABLED,
+QueryServicesOptions.DEFAULT_IS_NAMESPACE_MAPPING_ENABLED)
+&& (type == null || !PTableType.SYSTEM.equals(type)
--- End diff --

Yes, null is allowed for type here. 


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-06 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58678464
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/OffsetScanGrouper.java ---
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.iterate;
+
+import java.util.List;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.phoenix.compile.QueryPlan;
+
+/**
+ * Default implementation that creates a scan group if a plan is row key 
ordered (which requires a merge sort),
+ * or if a scan crosses a region boundary and the table is salted or a 
local index.   
+ */
+public class OffsetScanGrouper implements ParallelScanGrouper {
--- End diff --

It seems that for SerialIterators we also use DefaultParallelScanGrouper to split scans into groups by the condition `(isSalted || table.getIndexType() == IndexType.LOCAL) && ScanUtil.shouldRowsBeInRowKeyOrder(orderBy, context)` and run each group in one thread.

Anyway, I have modified the plan to run the offset on the server by including this condition and reusing the DefaultParallelScanGrouper.


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58591817
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/SubselectRewriter.java ---
@@ -188,6 +190,13 @@ private SelectStatement flatten(SelectStatement 
select, SelectStatement subselec
 }
 }
 
+OffsetNode offset = select.getOffset();
+if (offsetRewrite != null || (limitRewrite != null & offset != 
null)) {
--- End diff --

@JamesRTaylor, will it really affect the boolean operands?
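
As far as I can tell, with boolean operands the result is the same either way; the only difference is that `&` does not short-circuit, so both null checks are always evaluated. A standalone illustration (not Phoenix code):

```java
// '&' vs '&&' on boolean operands: same value, different evaluation of the right side.
public class BooleanAndDemo {
    static boolean rhs(boolean value) {
        System.out.println("rhs evaluated");
        return value;
    }

    public static void main(String[] args) {
        System.out.println(false & rhs(true));   // prints "rhs evaluated", then false
        System.out.println(false && rhs(true));  // right side skipped, prints false
    }
}
```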


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58588402
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/OffsetScanGrouper.java ---
@@ -0,0 +1,44 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.iterate;
+
+import java.util.List;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.phoenix.compile.QueryPlan;
+
+/**
+ * Default implementation that creates a scan group if a plan is row key 
ordered (which requires a merge sort),
+ * or if a scan crosses a region boundary and the table is salted or a 
local index.   
+ */
+public class OffsetScanGrouper implements ParallelScanGrouper {
--- End diff --

It is needed to avoid running scans in parallel even in SerialIterators. Should I use an anonymous class while creating SerialIterators to do the offset on the server?


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/154#issuecomment-205768420
  
Thanks @JamesRTaylor for the review. I have made the changes as per your review comments.


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58523285
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/execute/CorrelatePlanTest.java ---
@@ -172,17 +172,38 @@ public void testCorrelatePlanWithSingleValueOnly() 
throws SQLException {
 {2, "2", "2", 20},
 {5, "5", "5", 100},
 };
-testCorrelatePlan(LEFT_RELATION, rightRelation, 1, 0, 
JoinType.Inner, expected);
+testCorrelatePlan(LEFT_RELATION, rightRelation, 1, 0, 
JoinType.Inner, expected);
 }
 
-private void testCorrelatePlan(Object[][] leftRelation, Object[][] 
rightRelation, int leftCorrelColumn, int rightCorrelColumn, JoinType type, 
Object[][] expectedResult) throws SQLException {
+@Test
--- End diff --

Added in the last commit.


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58523267
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/OrderedResultIterator.java
 ---
@@ -130,9 +135,9 @@ public OrderedResultIterator(ResultIterator delegate, 
List or
 SizedUtil.OBJECT_SIZE + estimatedRowSize;
 
 // Make sure we don't overflow Long, though this is really 
unlikely to happen.
-assert(limit == null || Long.MAX_VALUE / estimatedEntrySize >= 
limit);
+assert (limit == null || Long.MAX_VALUE / estimatedEntrySize >= 
limit + this.offset);
--- End diff --

I checked and there was no space before assert.


---


[GitHub] phoenix pull request: PHOENIX-2722 support mysql limit,offset clau...

2016-04-05 Thread ankitsinghal
Github user ankitsinghal commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/154#discussion_r58512767
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableSerialIterators.java 
---
@@ -0,0 +1,129 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.iterate;
+
+import static org.apache.phoenix.monitoring.MetricType.SCAN_BYTES;
+
+import java.sql.SQLException;
+import java.util.Collections;
+import java.util.List;
+import java.util.Queue;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Future;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.job.JobManager.JobCallable;
+import org.apache.phoenix.monitoring.TaskExecutionMetricsHolder;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.trace.util.Tracing;
+import org.apache.phoenix.util.ScanUtil;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+
+
+/**
+ *
+ * Class that parallelizes the scan over a table using the ExecutorService 
provided.  Each region of the table will be scanned in parallel with
+ * the results accessible through {@link #getIterators()}
+ *
+ * 
+ * @since 0.1
+ */
+public class TableSerialIterators extends BaseResultIterators {
--- End diff --

Yes, I am able to reuse it now; earlier I was facing some issues.


---

