[GitHub] phoenix issue #238: PHOENIX-3690 Support declaring default values in Phoenix...

2017-04-02 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/238
  
+1, Kevin. Thanks for the update. Will commit it.




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90192895
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java ---
@@ -217,39 +223,47 @@ public void testSecondaryIndex() throws Exception {
 
         assertFalse(rs.next());
 
-        conn.createStatement().execute(String.format("DROP INDEX  %s ON %s", indxTable, fullTableName));
+        // conn.createStatement().execute(String.format("DROP INDEX  %s ON %s", indxTable, fullTableName));
--- End diff --

We can remove this commented-out code.




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90192647
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int exitCode) {
         formatter.printHelp("help", options);
         System.exit(exitCode);
     }
+
+    class JobFactory {
+        Connection connection;
+        Configuration configuration;
+        private Path outputPath;
 
-    @Override
-    public int run(String[] args) throws Exception {
-        Connection connection = null;
-        try {
-            CommandLine cmdLine = null;
-            try {
-                cmdLine = parseOptions(args);
-            } catch (IllegalStateException e) {
-                printHelpAndExit(e.getMessage(), getOptions());
+        public JobFactory(Connection connection, Configuration configuration, Path outputPath) {
+            this.connection = connection;
+            this.configuration = configuration;
+            this.outputPath = outputPath;
+
+        }
+
+        public Job getJob(String schemaName, String indexTable, String dataTable, boolean useDirectApi) throws Exception {
+            if (indexTable == null) {
+                return configureJobForPartialBuild(schemaName, dataTable);
+            } else {
+                return configureJobForAysncIndex(schemaName, indexTable, dataTable, useDirectApi);
             }
-            final Configuration configuration = HBaseConfiguration.addHbaseResources(getConf());
-            final String schemaName = cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-            final String dataTable = cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-            final String indexTable = cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+        }
+
+        private Job configureJobForPartialBuild(String schemaName, String dataTable) throws Exception {
             final String qDataTable = SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-            final String qIndexTable = SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+            final PTable pdataTable = PhoenixRuntime.getTable(connection, qDataTable);
             connection = ConnectionUtil.getInputConnection(configuration);
-            if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-                throw new IllegalArgumentException(String.format(
-                        " %s is not an index table for %s ", qIndexTable, qDataTable));
+            long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+            PTable indexWithMinDisableTimestamp = null;
+
+            // Get Indexes in building state, minDisabledTimestamp
+            List<String> disableIndexes = new ArrayList<String>();
+            List<PTable> disabledPIndexes = new ArrayList<PTable>();
+            for (PTable index : pdataTable.getIndexes()) {
+                if (index.getIndexState().equals(PIndexState.BUILDING)) {
+                    disableIndexes.add(index.getTableName().getString());
+                    disabledPIndexes.add(index);
+                    if (minDisableTimestamp > index.getIndexDisableTimestamp()) {
+                        minDisableTimestamp = index.getIndexDisableTimestamp();
+                        indexWithMinDisableTimestamp = index;
+                    }
+                }
+            }
+
+            if (indexWithMinDisableTimestamp == null) {
+                throw new Exception("There is no index for a datatable to be rebuild:" + qDataTable);
             }
+            if (minDisableTimestamp == 0) {
+                throw new Exception("It seems Index " + indexWithMinDisableTimestamp
+                        + " has disable timestamp as 0 , please run IndexTool with IndexName to build it first");
+                // TODO probably we can initiate the job by ourself or can skip them while making the list for partial build with a warning
+            }
+
+            long maxTimestamp = getMaxRebuildAsyncDate(schemaName, disableIndexes);
+
+            // serialize index maintainer in job conf with Base64 TODO: Need to find better way to serialize them in conf.
+            List<IndexMaintainer> maintainers = Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+            for (PTable index : disabledPIndexes) {
+                maintainers.add(index.getIndexMaintainer(pdataTable, connection.unwrap(PhoenixConnection.class)));
+            }
+            ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90188400
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -305,16 +305,27 @@
             TENANT_ID + "," +
             TABLE_SCHEM + "," +
             TABLE_NAME + "," +
-            INDEX_STATE +
+            INDEX_STATE + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
+            ") VALUES (?, ?, ?, ?, ?)";
+
+    private static final String UPDATE_INDEX_REBUILD_ASYNC_STATE =
+            "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + SYSTEM_CATALOG_TABLE + "\"( " +
+            TENANT_ID + "," +
+            TABLE_SCHEM + "," +
+            TABLE_NAME + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
             ") VALUES (?, ?, ?, ?)";
+
     private static final String UPDATE_INDEX_STATE_TO_ACTIVE =
             "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + SYSTEM_CATALOG_TABLE + "\"( " +
             TENANT_ID + "," +
             TABLE_SCHEM + "," +
             TABLE_NAME + "," +
             INDEX_STATE + "," +
-            INDEX_DISABLE_TIMESTAMP +
-            ") VALUES (?, ?, ?, ?, ?)";
+            INDEX_DISABLE_TIMESTAMP + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
--- End diff --

OK, it's fine. Thanks, Ankit.





[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83832270
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -305,16 +305,27 @@
             TENANT_ID + "," +
             TABLE_SCHEM + "," +
             TABLE_NAME + "," +
-            INDEX_STATE +
+            INDEX_STATE + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
+            ") VALUES (?, ?, ?, ?, ?)";
+
+    private static final String UPDATE_INDEX_REBUILD_ASYNC_STATE =
+            "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + SYSTEM_CATALOG_TABLE + "\"( " +
+            TENANT_ID + "," +
+            TABLE_SCHEM + "," +
+            TABLE_NAME + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
             ") VALUES (?, ?, ?, ?)";
+
     private static final String UPDATE_INDEX_STATE_TO_ACTIVE =
             "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + SYSTEM_CATALOG_TABLE + "\"( " +
             TENANT_ID + "," +
             TABLE_SCHEM + "," +
             TABLE_NAME + "," +
             INDEX_STATE + "," +
-            INDEX_DISABLE_TIMESTAMP +
-            ") VALUES (?, ?, ?, ?, ?)";
+            INDEX_DISABLE_TIMESTAMP + "," +
+            ASYNC_REBUILD_TIMESTAMP + " " + PLong.INSTANCE.getSqlTypeName() +
--- End diff --

Here ASYNC_REBUILD_TIMESTAMP is a dynamic column. Are there any problems if we use dynamic columns for system tables? Why can't you make it a normal column?
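
For reference, the built string expands to something like the sketch below; declaring the column inline with its SQL type is what makes it dynamic:

{code}
// Illustrative expansion of UPDATE_INDEX_REBUILD_ASYNC_STATE above, assuming
// PLong.INSTANCE.getSqlTypeName() returns BIGINT. ASYNC_REBUILD_TIMESTAMP is
// declared inline with its type instead of existing in SYSTEM.CATALOG's schema.
String upsert = "UPSERT INTO SYSTEM.\"CATALOG\"( " +
        "TENANT_ID,TABLE_SCHEM,TABLE_NAME," +
        "ASYNC_REBUILD_TIMESTAMP BIGINT" + // the dynamic column declaration
        ") VALUES (?, ?, ?, ?)";
{code}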




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83827883
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int exitCode) {
         formatter.printHelp("help", options);
         System.exit(exitCode);
     }
+
+    class JobFactory {
+        Connection connection;
+        Configuration configuration;
+        private Path outputPath;
 
-    @Override
-    public int run(String[] args) throws Exception {
-        Connection connection = null;
-        try {
-            CommandLine cmdLine = null;
-            try {
-                cmdLine = parseOptions(args);
-            } catch (IllegalStateException e) {
-                printHelpAndExit(e.getMessage(), getOptions());
+        public JobFactory(Connection connection, Configuration configuration, Path outputPath) {
+            this.connection = connection;
+            this.configuration = configuration;
+            this.outputPath = outputPath;
+
+        }
+
+        public Job getJob(String schemaName, String indexTable, String dataTable, boolean useDirectApi) throws Exception {
+            if (indexTable == null) {
+                return configureJobForPartialBuild(schemaName, dataTable);
+            } else {
+                return configureJobForAysncIndex(schemaName, indexTable, dataTable, useDirectApi);
             }
-            final Configuration configuration = HBaseConfiguration.addHbaseResources(getConf());
-            final String schemaName = cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-            final String dataTable = cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-            final String indexTable = cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+        }
+
+        private Job configureJobForPartialBuild(String schemaName, String dataTable) throws Exception {
             final String qDataTable = SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-            final String qIndexTable = SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+            final PTable pdataTable = PhoenixRuntime.getTable(connection, qDataTable);
             connection = ConnectionUtil.getInputConnection(configuration);
-            if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-                throw new IllegalArgumentException(String.format(
-                        " %s is not an index table for %s ", qIndexTable, qDataTable));
+            long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+            PTable indexWithMinDisableTimestamp = null;
+
+            // Get Indexes in building state, minDisabledTimestamp
+            List<String> disableIndexes = new ArrayList<String>();
+            List<PTable> disabledPIndexes = new ArrayList<PTable>();
+            for (PTable index : pdataTable.getIndexes()) {
+                if (index.getIndexState().equals(PIndexState.BUILDING)) {
+                    disableIndexes.add(index.getTableName().getString());
+                    disabledPIndexes.add(index);
+                    if (minDisableTimestamp > index.getIndexDisableTimestamp()) {
+                        minDisableTimestamp = index.getIndexDisableTimestamp();
+                        indexWithMinDisableTimestamp = index;
+                    }
+                }
+            }
+
+            if (indexWithMinDisableTimestamp == null) {
+                throw new Exception("There is no index for a datatable to be rebuild:" + qDataTable);
             }
+            if (minDisableTimestamp == 0) {
+                throw new Exception("It seems Index " + indexWithMinDisableTimestamp
+                        + " has disable timestamp as 0 , please run IndexTool with IndexName to build it first");
+                // TODO probably we can initiate the job by ourself or can skip them while making the list for partial build with a warning
+            }
+
+            long maxTimestamp = getMaxRebuildAsyncDate(schemaName, disableIndexes);
+
+            // serialize index maintainer in job conf with Base64 TODO: Need to find better way to serialize them in conf.
+            List<IndexMaintainer> maintainers = Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+            for (PTable index : disabledPIndexes) {
+                maintainers.add(index.getIndexMaintainer(pdataTable, connection.unwrap(PhoenixConnection.class)));
+            }
+            ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83827393
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int exitCode) {
         formatter.printHelp("help", options);
         System.exit(exitCode);
     }
+
+    class JobFactory {
+        Connection connection;
+        Configuration configuration;
+        private Path outputPath;
 
-    @Override
-    public int run(String[] args) throws Exception {
-        Connection connection = null;
-        try {
-            CommandLine cmdLine = null;
-            try {
-                cmdLine = parseOptions(args);
-            } catch (IllegalStateException e) {
-                printHelpAndExit(e.getMessage(), getOptions());
+        public JobFactory(Connection connection, Configuration configuration, Path outputPath) {
+            this.connection = connection;
+            this.configuration = configuration;
+            this.outputPath = outputPath;
+
+        }
+
+        public Job getJob(String schemaName, String indexTable, String dataTable, boolean useDirectApi) throws Exception {
+            if (indexTable == null) {
+                return configureJobForPartialBuild(schemaName, dataTable);
+            } else {
+                return configureJobForAysncIndex(schemaName, indexTable, dataTable, useDirectApi);
             }
-            final Configuration configuration = HBaseConfiguration.addHbaseResources(getConf());
-            final String schemaName = cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-            final String dataTable = cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-            final String indexTable = cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+        }
+
+        private Job configureJobForPartialBuild(String schemaName, String dataTable) throws Exception {
             final String qDataTable = SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-            final String qIndexTable = SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+            final PTable pdataTable = PhoenixRuntime.getTable(connection, qDataTable);
             connection = ConnectionUtil.getInputConnection(configuration);
-            if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-                throw new IllegalArgumentException(String.format(
-                        " %s is not an index table for %s ", qIndexTable, qDataTable));
+            long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+            PTable indexWithMinDisableTimestamp = null;
+
+            // Get Indexes in building state, minDisabledTimestamp
+            List<String> disableIndexes = new ArrayList<String>();
+            List<PTable> disabledPIndexes = new ArrayList<PTable>();
+            for (PTable index : pdataTable.getIndexes()) {
+                if (index.getIndexState().equals(PIndexState.BUILDING)) {
+                    disableIndexes.add(index.getTableName().getString());
+                    disabledPIndexes.add(index);
+                    if (minDisableTimestamp > index.getIndexDisableTimestamp()) {
+                        minDisableTimestamp = index.getIndexDisableTimestamp();
+                        indexWithMinDisableTimestamp = index;
+                    }
+                }
+            }
+
+            if (indexWithMinDisableTimestamp == null) {
+                throw new Exception("There is no index for a datatable to be rebuild:" + qDataTable);
             }
+            if (minDisableTimestamp == 0) {
+                throw new Exception("It seems Index " + indexWithMinDisableTimestamp
+                        + " has disable timestamp as 0 , please run IndexTool with IndexName to build it first");
+                // TODO probably we can initiate the job by ourself or can skip them while making the list for partial build with a warning
+            }
+
+            long maxTimestamp = getMaxRebuildAsyncDate(schemaName, disableIndexes);
+
+            // serialize index maintainer in job conf with Base64 TODO: Need to find better way to serialize them in conf.
+            List<IndexMaintainer> maintainers = Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+            for (PTable index : disabledPIndexes) {
+                maintainers.add(index.getIndexMaintainer(pdataTable, connection.unwrap(PhoenixConnection.class)));
+            }
+            ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83824633
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -85,7 +102,7 @@
     private static final Option DATA_TABLE_OPTION = new Option("dt", "data-table", true,
             "Data table name (mandatory)");
     private static final Option INDEX_TABLE_OPTION = new Option("it", "index-table", true,
-            "Index table name(mandatory)");
+            "Index table name(not required in case of partial rebuilding)");
--- End diff --

It would be better to have an explicit argument for partial rebuilding than to fall back to a partial rebuild whenever the index table is not mentioned.
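
For example, a sketch (the option name is illustrative, not from the patch):

{code}
// Hypothetical explicit flag, mirroring the existing Option declarations in IndexTool:
private static final Option PARTIAL_REBUILD_OPTION = new Option("pr", "partial-rebuild",
        false, "Rebuild all indexes of the data table that are in a building state");

// ...and in parseOptions(), fail fast on a conflicting combination:
if (cmdLine.hasOption(PARTIAL_REBUILD_OPTION.getOpt())
        && cmdLine.hasOption(INDEX_TABLE_OPTION.getOpt())) {
    throw new IllegalStateException("Index table name should not be passed with partial rebuild");
}
{code}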




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795678
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java ---
@@ -568,4 +580,52 @@ public void testAsyncCreatedDate() throws Exception {
         assertTrue(d2.after(d1));
         assertFalse(rs.next());
     }
+
+    @Test
+    public void testAsyncRebuildTimestamp() throws Exception {
+        long l0 = System.currentTimeMillis();
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        conn.setAutoCommit(false);
+        String testTable = generateUniqueName();
+
+        String ddl = "create table " + testTable + " (k varchar primary key, v1 varchar, v2 varchar, v3 varchar)";
+        PreparedStatement stmt = conn.prepareStatement(ddl);
+        stmt.execute();
+        String indexName = "R_ASYNCIND_" + generateUniqueName();
+
+        ddl = "CREATE INDEX " + indexName + "1 ON " + testTable + " (v1) ";
+        stmt = conn.prepareStatement(ddl);
--- End diff --

You can use createStatement rather than prepareStatement here.
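
A sketch:

{code}
// DDL with no bind parameters doesn't need a PreparedStatement:
conn.createStatement().execute("CREATE INDEX " + indexName + "1 ON " + testTable + " (v1)");
{code}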




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795282
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java ---
@@ -568,4 +580,52 @@ public void testAsyncCreatedDate() throws Exception {
         assertTrue(d2.after(d1));
         assertFalse(rs.next());
     }
+
+    @Test
+    public void testAsyncRebuildTimestamp() throws Exception {
+        long l0 = System.currentTimeMillis();
--- End diff --

Can you use meaningful names for the variables l0 and l1?
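
For example (a sketch):

{code}
long beforeIndexCreation = System.currentTimeMillis(); // instead of l0
// ... create the table and indexes ...
long afterIndexCreation = System.currentTimeMillis();  // instead of l1
{code}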




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795010
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java ---
@@ -216,6 +219,15 @@ public void testIndexCreateDrop() throws Exception {
         assertFalse(rs.next());
 
         assertActiveIndex(conn, INDEX_DATA_SCHEMA, indexDataTable);
+
+        ddl = "ALTER INDEX " + indexName + " ON " + INDEX_DATA_SCHEMA + QueryConstants.NAME_SEPARATOR + indexDataTable + " REBUILD ASYNC";
+        conn.createStatement().execute(ddl);
+        // Verify the metadata for index is correct.
+        rs = conn.getMetaData().getTables(null, StringUtil.escapeLike(INDEX_DATA_SCHEMA), indexName, new String[] {PTableType.INDEX.toString()});
+        assertTrue(rs.next());
+        assertEquals(indexName, rs.getString(3));
+        assertEquals(PIndexState.BUILDING.toString(), rs.getString("INDEX_STATE"));
--- End diff --

This makes an active index go through a rebuild. Can we add an assertion on the time from which we rebuild?
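
For example, something like this sketch (it assumes the new ASYNC_REBUILD_TIMESTAMP value can be read back via dynamic-column syntax; adjust to however the patch finally surfaces it):

{code}
// Sketch: bound the rebuild timestamp between wall-clock times taken around the ALTER.
long beforeRebuild = System.currentTimeMillis();
conn.createStatement().execute(ddl); // ALTER INDEX ... REBUILD ASYNC
long afterRebuild = System.currentTimeMillis();
ResultSet tsRs = conn.createStatement().executeQuery(
        "SELECT ASYNC_REBUILD_TIMESTAMP FROM SYSTEM.\"CATALOG\"(ASYNC_REBUILD_TIMESTAMP BIGINT)" +
        " WHERE TABLE_NAME = '" + indexName + "' AND ASYNC_REBUILD_TIMESTAMP IS NOT NULL");
assertTrue(tsRs.next());
long rebuildTs = tsRs.getLong(1);
assertTrue(rebuildTs >= beforeRebuild && rebuildTs <= afterRebuild);
{code}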




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83793592
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java ---
@@ -500,14 +500,12 @@ public void testOrderByWithExpression() throws Exception {
         stmt.execute();
         conn.commit();
 
-        String query = "SELECT col1+col2, col4, a_string FROM " + tableName + " ORDER BY 1, 2";
--- End diff --

Why is this change needed?




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83793514
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildWithNamespaceEnabled.java ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildWithNamespaceEnabled}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildWithNamespaceEnabled extends IndexToolForPartialBuildIT {
+
+    public IndexToolForPartialBuildWithNamespaceEnabled(boolean localIndex, boolean isNamespaceEnabled) {
+        super(localIndex);
+        this.isNamespaceEnabled = isNamespaceEnabled;
+    }
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(7);
+        serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+        serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
+        serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+        serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+        serverProps.put("hbase.client.pause", "5000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, "2000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, "1000");
+        serverProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+        Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(1);
+        clientProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+        setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
+    }
+
+    @Parameters(name="localIndex = {0} , isNamespaceEnabled = {1}")
+    public static Collection<Boolean[]> data() {
+        return Arrays.asList(new Boolean[][] {
+                { false, true}, { true, false }
--- End diff --

We can make IndexToolForPartialBuildIT itself run with namespaces enabled and disabled. We don't need an extra IT test.
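
For example, a sketch of widening the parameters in IndexToolForPartialBuildIT itself:

{code}
// Sketch: cover both dimensions in one parameterized IT; the constructor
// would take both flags instead of a subclass hard-coding the second one.
@Parameters(name = "localIndex = {0} , isNamespaceEnabled = {1}")
public static Collection<Boolean[]> data() {
    return Arrays.asList(new Boolean[][] {
            { false, false }, { false, true }, { true, false }, { true, true }
    });
}
{code}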




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83792778
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.mapreduce.index.IndexTool;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildIT}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildIT extends BaseOwnClusterIT {
+
+    private final boolean localIndex;
+    protected boolean isNamespaceEnabled = false;
+    protected final String tableDDLOptions;
+
+    public IndexToolForPartialBuildIT(boolean localIndex) {
+
+        this.localIndex = localIndex;
+        StringBuilder optionBuilder = new StringBuilder();
+        optionBuilder.append(" SPLIT ON(1,2)");
+        this.tableDDLOptions = optionBuilder.toString();
+    }
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(7);
+        serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+        serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
+        serverProps.put(" yarn.scheduler.capacity.maximum-am-resource-percent", "1.0");
+        serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+        serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+        serverProps.put("hbase.client.pause", "5000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, "1000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, "2000");
+        Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(1);
+        setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
+    }
+
+    @Par

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83792475
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.mapreduce.index.IndexTool;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildIT}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildIT extends BaseOwnClusterIT {
+
+    private final boolean localIndex;
+    protected boolean isNamespaceEnabled = false;
+    protected final String tableDDLOptions;
+
+    public IndexToolForPartialBuildIT(boolean localIndex) {
+
+        this.localIndex = localIndex;
+        StringBuilder optionBuilder = new StringBuilder();
+        optionBuilder.append(" SPLIT ON(1,2)");
+        this.tableDDLOptions = optionBuilder.toString();
+    }
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(7);
+        serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+        serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
+        serverProps.put(" yarn.scheduler.capacity.maximum-am-resource-percent", "1.0");
+        serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+        serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+        serverProps.put("hbase.client.pause", "5000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, "1000");
+        serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, "2000");
+        Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(1);
+        setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
+    }
+
+    @Par

[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80925250
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -1445,6 +1445,9 @@ public MetaDataResponse call(MetaDataService instance) throws IOException {
                     builder.addTableMetadataMutations(mp.toByteString());
                 }
                 builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+                if (allocateIndexId) {
--- End diff --

nit: just format code here.




[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80916280
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, CreateTableRequest request,
                         cell.getTimestamp(), Type.codeToType(cell.getTypeByte()), bytes);
                     cells.add(viewConstantCell);
                 }
+                Short indexId = null;
+                if (request.hasAllocateIndexId() && request.getAllocateIndexId()) {
+                    String tenantIdStr = tenantIdBytes.length == 0 ? null : Bytes.toString(tenantIdBytes);
+                    final Properties props = new Properties();
+                    UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+                    try (PhoenixConnection connection = DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), props).unwrap(PhoenixConnection.class)) {
+                        PName physicalName = parentTable.getPhysicalName();
+                        int nSequenceSaltBuckets = connection.getQueryServices().getSequenceSaltBuckets();
+                        SequenceKey key = MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+                                nSequenceSaltBuckets, parentTable.isNamespaceMapped());
+                        // TODO Review Earlier sequence was created at (SCN-1/LATEST_TIMESTAMP) and incremented at the client max(SCN,dataTable.getTimestamp), but it seems we should
+                        // use always LATEST_TIMESTAMP to avoid seeing wrong sequence values by different connection having SCN
+                        // or not.
+                        long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+                        try {
+                            connection.getQueryServices().createSequence(key.getTenantId(), key.getSchemaName(), key.getSequenceName(),
--- End diff --

@ankitsinghal This patch introduces inter-table RPC calls, but it seems they are needed. What do you think of moving the createSequence call to MetaDataClient and performing only the sequence increment here? That way we can at least reduce the RPC calls here.




[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
@AyolaJayamaha we want these improvements and this cleanup to be in the 4.8.1 release. Is anything else pending here? Is it ready for commit?




[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-08-29 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Here are a couple of issues I found: one while starting the trace server and one while getting the results in the UI.
Currently the Eclipse Jetty version used, from the main pom.xml, is:

<jetty.version>8.1.7.v20120910</jetty.version>

`Exception in thread "main" java.lang.NoClassDefFoundError: javax/servlet/FilterRegistration
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:134)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:114)
at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:102)
at org.eclipse.jetty.webapp.WebAppContext.<init>(WebAppContext.java:181)
at org.apache.phoenix.tracingwebapp.http.Main.run(Main.java:72)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.phoenix.tracingwebapp.http.Main.main(Main.java:54)
Caused by: java.lang.ClassNotFoundException: javax.servlet.FilterRegistration
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
... 7 more
`

When I changed the Jetty version to 7.6.19.v20160209 it works fine. Aren't you facing this?
Once I do that, I again get the exception below and am not able to read anything from the trace table:

`104933 [qtp1157440841-20] WARN org.eclipse.jetty.servlet.ServletHandler - Error for /trace/
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
at org.apache.phoenix.tracingwebapp.http.TraceServlet.getResults(TraceServlet.java:136)
at org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:112)
at org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(TraceServlet.java:67)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:556)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:369)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:464)
at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:913)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:975)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.ObjectMapper
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader

[GitHub] phoenix issue #193: Improvements to Phoenix Web App

2016-08-26 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/193
  
@AyolaJayamaha Currently we are trying to get the webapp-related files from the target directory in org.apache.phoenix.tracingwebapp.http.Main, so the runnable jar cannot be used independently of the build. Can you also make changes such that both the runnable jar and the webapp files reside in the build output and are picked up from there in the code? (See the sketch after the snippet below.)
{code}
URL location = domain.getCodeSource().getLocation();
String webappDirLocation = location.toString().split("target")[0] + "src/main/webapp";
Server server = new Server(port);
WebAppContext root = new WebAppContext();
{code}
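
For example, a sketch (the "webapp" classpath resource name is an assumption about how the runnable jar would be packaged, not something the patch must use):

{code}
// Prefer a webapp directory bundled on the classpath; fall back to the
// source tree during development.
URL webRoot = Main.class.getClassLoader().getResource("webapp");
String webappDirLocation;
if (webRoot != null) {
    webappDirLocation = webRoot.toExternalForm();
} else {
    URL location = Main.class.getProtectionDomain().getCodeSource().getLocation();
    webappDirLocation = location.toString().split("target")[0] + "src/main/webapp";
}
Server server = new Server(port);
WebAppContext root = new WebAppContext();
root.setResourceBase(webappDirLocation);
server.setHandler(root);
{code}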




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-20 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r64123714
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java ---
@@ -187,13 +186,13 @@ public void initTable() throws Exception {
                 "CREATE LOCAL INDEX \"idx_supplier\" ON " + JOIN_SUPPLIER_TABLE_FULL_NAME + " (name)"
             }, {
                 "SORT-MERGE-JOIN (LEFT) TABLES\n" +
-                "CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [-32768]\n" +
+                "CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [1]\n" +
--- End diff --

I will work on this in another patch, James. Again, I need to change all the test cases.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-20 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r64123637
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java ---
@@ -2455,6 +2409,19 @@ public Void call() throws Exception {
                 }
 
                 if (currentServerSideTableTimeStamp < MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
+                    Properties props = PropertiesUtil.deepCopy(metaConnection.getClientInfo());
+                    props.remove(PhoenixRuntime.CURRENT_SCN_ATTRIB);
+                    props.remove(PhoenixRuntime.TENANT_ID_ATTRIB);
+                    PhoenixConnection conn =
+                            new PhoenixConnection(ConnectionQueryServicesImpl.this,
+                                    metaConnection.getURL(), props, metaConnection.getMetaDataCache());
+                    try {
+                        UpgradeUtil.upgradeLocalIndexes(conn);
--- End diff --

Agreed, James. Will do it in another patch.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/168#issuecomment-219617896
  
James, as we discussed, I have first made a patch that works with older versions of HBase and handled the review comments here. I will create a new pull request with that patch.

Thanks for the reviews.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463623
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java ---
@@ -160,6 +163,9 @@ public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c,
 
         // get the index updates for all elements in this batch
         indexUpdates = getIndexUpdates(c.getEnvironment(), indexMetaData, getMutationIterator(miniBatchOp), txRollbackAttribute);
+
+        IndexUtil.addLocalUpdatesToCpOperations(c, miniBatchOp, indexUpdates,
+                m.getDurability() != Durability.SKIP_WAL);
--- End diff --

Yes




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463618
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java ---
@@ -116,7 +117,10 @@ public void write(Multimap<HTableInterfaceReference, Mutation> toWrite) throws S
             // doing a complete copy over of all the index update for each table.
             final List<Mutation> mutations = kvBuilder.cloneIfNecessary((List<Mutation>)entry.getValue());
             final HTableInterfaceReference tableReference = entry.getKey();
-            final RegionCoprocessorEnvironment env = this.env;
+            if (env != null && tableReference.getTableName().equals(
+                    env.getRegion().getTableDesc().getNameAsString())) {
+                continue;
+            }
--- End diff --

Yes, we should do that. For now I made a patch without HBASE-15600; once that's ready I will add the condition check.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463584
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -1016,7 +1016,7 @@ private MutationState buildIndexAtTimeStamp(PTable index, NamedTableNode dataTab
         // connection so that our new index table is visible.
         Properties props = new Properties(connection.getClientInfo());
         props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(connection.getSCN()+1));
-        PhoenixConnection conn = DriverManager.getConnection(connection.getURL(), props).unwrap(PhoenixConnection.class);
+        PhoenixConnection conn = new PhoenixConnection(connection, connection.getQueryServices(), props);
--- End diff --

DriverManager.getConnection has more overhead than initializing a PhoenixConnection directly; that's why I changed it.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463462
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java ---
@@ -297,4 +297,28 @@ public boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier,
         return delegate.checkAndDelete(row, family, qualifier, compareOp, value, delete);
     }
 
+    @Override
--- End diff --

These methods are required for the 1.2.2-SNAPSHOT version, so they are not currently required with older versions. I will remove them.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463482
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java ---
@@ -861,7 +871,12 @@ public Put buildUpdateMutation(KeyValueBuilder kvBuilder, ValueGetter valueGette
                     put.setDurability(!indexWALDisabled ? Durability.USE_DEFAULT : Durability.SKIP_WAL);
                 }
                 //this is a little bit of extra work for installations that are running <0.94.14, but that should be rare and is a short-term set of wrappers - it shouldn't kill GC
-                put.add(kvBuilder.buildPut(rowKey, ref.getFamilyWritable(), cq, ts, value));
+                if (this.isLocalIndex) {
+                    ColumnReference columnReference = this.coveredColumnsMap.get(ref);
+                    put.add(kvBuilder.buildPut(rowKey, columnReference.getFamilyWritable(), cq, ts, value));
--- End diff --

Yes, James. It's to save regenerating the column family name every time.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463350
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java ---
@@ -202,7 +207,10 @@ protected RegionScanner doPostScannerOpen(final ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463302
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java ---
@@ -187,13 +186,13 @@ public void initTable() throws Exception {
                 "CREATE LOCAL INDEX \"idx_supplier\" ON " + JOIN_SUPPLIER_TABLE_FULL_NAME + " (name)"
             }, {
                 "SORT-MERGE-JOIN (LEFT) TABLES\n" +
-                "CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [-32768]\n" +
+                "CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [1]\n" +
--- End diff --

@JamesRTaylor Yes, agreed. Is "OVER LOCAL INDEX OF DATA_TABLE" fine, or can we include the index table name as well?




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-10 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/168

PHOENIX-1734 Local index improvements(Rajeshbabu)

This is the patch for the new implementation of local indexes, where we store the local index data in separate column families in the same table rather than in a different table (a sketch of the user-visible effect is below).
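
For illustration (the shadow column family name is an assumption here, not taken from the patch):

{code}
// Sketch: with this patch a local index no longer creates a separate
// _LOCAL_IDX_<table> table; its rows land in a dedicated column family
// (e.g. "L#0" -- name assumed for illustration) of the data table itself,
// so index regions are colocated with data regions by construction.
Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
conn.createStatement().execute(
        "CREATE TABLE T (K VARCHAR PRIMARY KEY, V1 VARCHAR, V2 VARCHAR)");
conn.createStatement().execute("CREATE LOCAL INDEX IDX_V1 ON T (V1)");
{code}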

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit 2dbb361551b48b96d8335c12b397a2258c266fd3
Author: Rajeshbabu Chintaguntla <rajeshb...@apache.org>
Date:   2016-05-10T14:00:41Z

PHOENIX-1734 Local index improvements(Rajeshbabu)






[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/156#issuecomment-207582047
  
@JamesRTaylor  Thanks for the review. Committed the changes addressing the 
review comments.
Refactored the code and added code comments wherever possible. 

The special cases required for ChunkedResultIterator are no longer handled. 

bq. Another potential, different approach would be for Phoenix to 
universally handle the split during scan case (rather than letting the HBase 
client scanner handle it for non aggregate case and Phoenix handle it for the 
aggregate case). Would that simplify things?
Now we throw a stale region boundary exception in every case where we get an 
NSRE, so the Phoenix client handles the NSRE rather than the HBase client 
handling it with wrong region boundaries.

The ordered aggregate queries issue you mention can be handled as part of a 
separate issue.

Please review the latest code.
Thanks.
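
To make that concrete, here is a minimal sketch (not the committed code) of
the client-side retry described above, assuming the server surfaces boundary
changes as StaleRegionBoundaryCacheException; the scanNext, clearRegionCache,
reopenFrom, and keyOf hooks are illustrative stand-ins, not Phoenix APIs:

import java.sql.SQLException;
import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;

// Hedged sketch: resume a scan after a split/merge invalidates the cached
// region boundaries, without skipping or repeating rows.
abstract class SplitAwareScanner<R> {
    private byte[] lastKey; // key of the last row successfully returned

    abstract R scanNext() throws SQLException;  // may throw on stale boundaries
    abstract void clearRegionCache();           // drop cached region locations
    abstract void reopenFrom(byte[] afterKey) throws SQLException; // re-plan
    abstract byte[] keyOf(R row);

    R next() throws SQLException {
        try {
            R row = scanNext();
            if (row != null) lastKey = keyOf(row);
            return row;
        } catch (StaleRegionBoundaryCacheException e) {
            // A split or merge happened mid-scan: refresh the boundaries and
            // resume just after the last row we already returned.
            clearRegionCache();
            reopenFrom(lastKey);
            R row = scanNext();
            if (row != null) lastKey = keyOf(row);
            return row;
        }
    }
}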




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59078097
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
 ---
@@ -56,6 +57,7 @@
 private final MutationState mutationState;
 private Scan scan;
 private PeekingResultIterator resultIterator;
+private QueryPlan plan;
 
 public static class ChunkedResultIteratorFactory implements 
ParallelIteratorFactory {
--- End diff --

For now I have removed the changes required for ChunkedResultIterator. The 
remaining changes in ChunkedResultIterator just remove code that is not required.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59077866
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -402,8 +405,8 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59077569
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
 ---
@@ -337,6 +384,22 @@ public boolean nextRaw(List<Cell> result) throws IOException {
 arrayElementCell = 
result.get(arrayElementCellPosition);
 }
 if (ScanUtil.isLocalIndex(scan) && 
!ScanUtil.isAnalyzeTable(scan)) {
--- End diff --

We get the results only after the seek. I will check once more whether 
we can handle this in the local index scanner.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58642550
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List<byte[]> gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
--- End diff --

getParallelScans is the entry point. Let me move it out.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58642188
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

I agree with you, James. I think I can raise a separate issue for this and 
work on it. Wdyt?




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58641031
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -326,11 +325,12 @@ private static void 
optimizeProjection(StatementContext context, Scan scan, PTab
 }
 }
 
-public BaseResultIterators(QueryPlan plan, Integer perScanLimit, 
ParallelScanGrouper scanGrouper) throws SQLException {
+public BaseResultIterators(QueryPlan plan, Integer perScanLimit, 
ParallelScanGrouper scanGrouper, Scan scan) throws SQLException {
 super(plan.getContext(), plan.getTableRef(), plan.getGroupBy(), 
plan.getOrderBy(), plan.getStatement().getHint(), plan.getLimit());
 this.plan = plan;
 this.scanGrouper = scanGrouper;
 StatementContext context = plan.getContext();
+this.scan = scan == null ? context.getScan() : scan;
--- End diff --

This null check is not required; I will remove it.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640990
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -465,7 +465,14 @@ private static String toString(List<byte[]> gps) {
 }
 
private List<List<Scan>> getParallelScans() throws SQLException {
-return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
+if (scan == null
+|| (ScanUtil.isLocalIndex(scan)
+&& 
Bytes.compareTo(context.getScan().getStartRow(), scan.getStartRow()) == 0 && 
Bytes
+.compareTo(context.getScan().getStopRow(), 
scan.getStopRow()) == 0)) {
+return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
--- End diff --

This check detects whether the scan boundaries are equal to the context scan 
boundaries. If they are the same, we proceed by getting all parallel scans 
for the table. Will document it.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58641009
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -465,7 +465,14 @@ private static String toString(List<byte[]> gps) {
 }
 
private List<List<Scan>> getParallelScans() throws SQLException {
-return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
+if (scan == null
+|| (ScanUtil.isLocalIndex(scan)
+&& 
Bytes.compareTo(context.getScan().getStartRow(), scan.getStartRow()) == 0 && 
Bytes
+.compareTo(context.getScan().getStopRow(), 
scan.getStopRow()) == 0)) {
--- End diff --

Sure, I will move it to ScanUtil.
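
For illustration, the moved check might look like this (a sketch only; the
class and method names are assumed, not the actual ScanUtil API):

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical helper: true when the scan covers exactly the same key range
// as the statement context's scan.
final class ScanBounds {
    private ScanBounds() {}

    static boolean hasSameBoundaries(Scan scan, Scan contextScan) {
        return Bytes.compareTo(scan.getStartRow(), contextScan.getStartRow()) == 0
            && Bytes.compareTo(scan.getStopRow(), contextScan.getStopRow()) == 0;
    }
}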




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640818
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List<byte[]> gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
--- End diff --

If I move the creation of parallel scans for this special case out, I need to 
duplicate a lot of code. That's why I added the special case as part of 
creating the existing parallel scans.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640455
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List<byte[]> gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
+Scan newScan = ScanUtil.newScan(scan);
+if(ScanUtil.isLocalIndex(scan)) {
+newScan.setStartRow(regionInfo.getStartKey());
+newScan.setAttribute(SCAN_ACTUAL_START_ROW, 
regionInfo.getStartKey());
--- End diff --

The first one is the actual start key of the scan; the second one is the end 
key of the region. After this patch we don't need EXPECTED_UPPER_REGION_KEY, 
but I kept it for compatibility.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640276
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -121,8 +170,21 @@ public synchronized void initScanner() throws 
SQLException {
 this.scanIterator =
 new 
ScanningResultIterator(htable.getScanner(scan), scanMetrics);
 } catch (IOException e) {
-Closeables.closeQuietly(htable);
-throw ServerUtil.parseServerException(e);
+if(handleSplitRegionBoundaryFailureDuringInitialization) {
--- End diff --

This is required for ChunkedResultIterator; if we are going to deprecate it or 
stop using it, we don't need these changes. Apart from that we might need to 
deal with it in PhoenixRecordReader (currently we skip the range check, but 
for local indexes handling it is compulsory, otherwise we might miss records 
from one of the daughter regions).




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639974
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
--- End diff --

Will move the check to ScanUtil.
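
As a sketch, the ScanUtil version of the check could be as simple as the
following (the constant mirrors the NON_AGGREGATE_QUERY attribute checked in
the diff above; the literal value and method name are assumptions):

import org.apache.hadoop.hbase.client.Scan;

final class ScanChecks {
    // Assumed to match the attribute the client sets for queries without
    // aggregation; the literal value here is illustrative only.
    static final String NON_AGGREGATE_QUERY = "NonAggregateQuery";

    private ScanChecks() {}

    static boolean isNonAggregateScan(Scan scan) {
        return scan.getAttribute(NON_AGGREGATE_QUERY) != null;
    }
}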




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639989
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
--- End diff --

I will add.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639956
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
--- End diff --

Will raise an improvement issue for this, James.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639845
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -80,13 +94,19 @@
 };
 
 
-public TableResultIterator(MutationState mutationState, TableRef 
tableRef, Scan scan, CombinableMetric scanMetrics, long renewLeaseThreshold) 
throws SQLException {
+public TableResultIterator(MutationState mutationState, Scan scan, 
CombinableMetric scanMetrics, long renewLeaseThreshold, QueryPlan plan) throws 
SQLException {
+this(mutationState, scan, scanMetrics, renewLeaseThreshold, plan, 
false);
+}
+
+public TableResultIterator(MutationState mutationState, Scan scan, 
CombinableMetric scanMetrics, long renewLeaseThreshold, QueryPlan plan, boolean 
handleSplitRegionBoundaryFailureDuringInitialization) throws SQLException {
 this.scan = scan;
 this.scanMetrics = scanMetrics;
-PTable table = tableRef.getTable();
+PTable table = plan.getTableRef().getTable();
 htable = mutationState.getHTable(table);
 this.scanIterator = UNINITIALIZED_SCANNER;
 this.renewLeaseThreshold = renewLeaseThreshold;
+this.plan = plan;
+this.handleSplitRegionBoundaryFailureDuringInitialization = 
handleSplitRegionBoundaryFailureDuringInitialization;
--- End diff --

handleSplitRegionBoundaryFailureDuringInitialization will be true when 
scanning intermediate chunks in ChunkedResultIterator. When we create the 
scanner for a new chunk we might see stale region boundaries; in that case we 
also need to forcibly recreate the iterator with the new boundaries. In the 
normal case, if we get a StaleRegionBoundaryCacheException while creating the 
scanner, BaseResultIterators handles it. 




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639358
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

You mean we can check the region boundary again in postScannerOpen and, if 
it's out of range, throw a stale region boundary exception which will be 
handled by BaseResultIterators?
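
A hedged sketch of that idea (illustrative only, not the actual coprocessor
hook; the expected keys are assumed to be carried on the scan by the client):

import java.io.IOException;
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: compare the region the client planned against with the region
// actually being scanned; on mismatch, fail so BaseResultIterators re-plans.
final class RegionBoundaryCheck {
    private RegionBoundaryCheck() {}

    static void checkScanStillInRegion(HRegionInfo region,
            byte[] expectedStartKey, byte[] expectedEndKey) throws IOException {
        if (Bytes.compareTo(region.getStartKey(), expectedStartKey) != 0
                || Bytes.compareTo(region.getEndKey(), expectedEndKey) != 0) {
            // Phoenix would wrap this so the client sees a stale region
            // boundary cache exception and restarts the scan.
            throw new IOException("Stale region boundaries: region split or merged");
        }
    }
}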




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638911
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -402,8 +405,8 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638946
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] 
originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, 
scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

I will try moving it outside of this method.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638568
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -423,7 +426,14 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58637933
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

Yes, aggregate queries are already handled properly. This code (guarded by 
the NON_AGGREGATE_QUERY attribute check above) only handles splits when we 
are in the middle of a non-aggregate query. If there are splits at the start 
of the query, we throw the stale region exception out to BaseResultIterators, 
which handles creating the proper parallel scans.





[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58632305
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] 
originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, 
scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

@JamesRTaylor  we cannot always use the keyOffset > 0 check to detect a local 
index scan, because in the special case where a table has only one region, 
the region start key and end key lengths are both zero, so keyOffset also 
becomes zero. In that case we still need to check whether it is a local index 
scan in order to set the attributes properly. Wdyt? 
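
A small illustration of that corner case (assuming, as a simplification, that
the row key offset is derived from the region boundary key lengths):

// Sketch: both boundary keys are empty for a table with a single region, so
// the offset alone cannot identify a local index scan.
final class KeyOffsets {
    private KeyOffsets() {}

    static int rowKeyOffset(byte[] regionStartKey, byte[] regionEndKey) {
        return regionStartKey.length > 0 ? regionStartKey.length
                                         : regionEndKey.length;
    }
    // rowKeyOffset(new byte[0], new byte[0]) == 0 for a single-region table
}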




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485551
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] 
originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, 
scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

Sure, I will try to remove the local index scan check.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485360
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
 ---
@@ -157,6 +162,7 @@ public Reader 
preStoreFileReaderOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-03-29 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/156

PHOENIX-2628 Ensure split when iterating through results handled corr…

The patch fixes issues with splits and merges while scanning local indexes. 
 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #156


commit d2e4d166dc2b32ed725183fc4465378029bd7834
Author: Rajeshbabu Chintaguntla <rajeshb...@apache.org>
Date:   2016-03-29T16:45:02Z

PHOENIX-2628 Ensure split when iterating through results handled 
correctly(Rajeshbabu)






[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096442
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -134,7 +143,36 @@ private static void setValues(byte[][] values, int[] 
pkSlotIndex, int[] columnIn
 }
 }
 ImmutableBytesPtr ptr = new ImmutableBytesPtr();
-table.newKey(ptr, pkValues);
+if(table.getIndexType()==IndexType.LOCAL) {
--- End diff --

We prepare the index updates server side only. But I have added this to 
prepare proper local index mutations in case we ever need to run an upsert 
into the index table directly. 




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096395
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinIT.java ---
@@ -873,7 +873,7 @@ public void initTable() throws Exception {
 "SERVER AGGREGATE INTO DISTINCT ROWS BY 
[\"I.0:NAME\"]\n" +
 "CLIENT MERGE SORT\n" +
 "PARALLEL LEFT-JOIN TABLE 0\n" +
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + 
MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + "" + JOIN_ITEM_TABLE_DISPLAY_NAME +" 
[-32768]\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " 
+JOIN_ITEM_TABLE_DISPLAY_NAME +" [-32768]\n" +
--- End diff --

Yes, it's using local indexes, but it's confusing because we have the same 
table name; the only difference here is the index id.
What do you think of changing it to the logical name in ExplainTable?
buf.append("OVER " + tableRef.getTable().getPhysicalName().getString());




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096275
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java 
---
@@ -186,7 +186,9 @@ private void testDeleteRange(boolean autoCommit, 
boolean createIndex, boolean lo
 PreparedStatement stmt;
 conn.setAutoCommit(autoCommit);
 deleteStmt = "DELETE FROM IntIntKeyTest WHERE i >= ? and i < ?";
-assertIndexUsed(conn, deleteStmt, Arrays.asList(5,10), 
indexName, false);
+if(!local) {
--- End diff --

It will be used. In the case of local indexes the explain plan always has 
"SCAN OVER DATATABLENAME", so the assertion fails; that's why it is not 
checked for the local index case. I will modify it to make a proper assertion 
for local indexes, e.g. it should include the index id as well.
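
For example, the assertion could check the index id in the plan output along
these lines (a JUnit-style sketch; the exact plan format is assumed from the
explain output quoted elsewhere in this thread):

import static org.junit.Assert.assertTrue;

import java.sql.Connection;
import java.sql.ResultSet;

// Sketch: local index plans scan the data table with the index id as a
// leading key range, e.g. "... RANGE SCAN OVER T [1]".
final class LocalIndexAsserts {
    private LocalIndexAsserts() {}

    static void assertLocalIndexUsed(Connection conn, String query,
            String dataTableName, int indexId) throws Exception {
        ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + query);
        StringBuilder plan = new StringBuilder();
        while (rs.next()) {
            plan.append(rs.getString(1)).append('\n');
        }
        assertTrue(plan.toString().contains(
            "SCAN OVER " + dataTableName + " [" + indexId + "]"));
    }
}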




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096413
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
 ---
@@ -80,6 +81,9 @@ public Reader 
preStoreFileReaderOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096498
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 ---
@@ -158,6 +162,10 @@ public RegionScanner 
preScannerOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46111214
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 ---
@@ -158,6 +162,10 @@ public RegionScanner 
preScannerOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46111265
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -134,7 +143,36 @@ private static void setValues(byte[][] values, int[] 
pkSlotIndex, int[] columnIn
 }
 }
 ImmutableBytesPtr ptr = new ImmutableBytesPtr();
-table.newKey(ptr, pkValues);
+if(table.getIndexType()==IndexType.LOCAL) {
--- End diff --

bq.  so I don't think we should include this code.
Will remove it in the next patch.

bq. We've talked about having a kind of "scrutiny" process that scans the 
data table and ensures that all the corresponding indexes have the correct rows 
(I filed PHOENIX-2460 for this). 
+1 on this.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-25 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/135

PHOENIX-1734 Local index improvements

The patch supports storing local index data in the same data table.
1) Removed code that used HBase internals in the balancer, split, and merge.
2) CREATE INDEX creates column families prefixed with L# corresponding to the 
data column families (see the sketch after this list).
3) Changes in the read and write path to use the L#-prefixed column families 
for local indexes.
4) Updated the tests.
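
A minimal sketch of the naming convention from 2) and 3), assuming the local
index column family prefix is the literal string "L#":

// Sketch: each data column family gets a shadow local-index family whose
// name is the data family name with the assumed "L#" prefix.
final class LocalIndexFamilies {
    private LocalIndexFamilies() {}

    static String localIndexColumnFamily(String dataColumnFamily) {
        return "L#" + dataColumnFamily; // e.g. "0" -> "L#0"
    }
}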

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #135


commit 4e663a2479adbf3e41826f40c1b2ed6bb69d7634
Author: Rajeshbabu Chintaguntla <rajeshb...@apache.org>
Date:   2015-11-25T16:33:33Z

PHOENIX-1734 Local index improvements(Rajeshbabu)






[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96969719
  
It's committed. Hence closing.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/77




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/3




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96276745
  
Thanks @JamesRTaylor  @samarthjain 
I have addressed the review comments and added the changes to the pull 
request. If it's OK, I will commit this tomorrow morning IST and work on the 
subtasks.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29103983
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
 ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

Added this, James. Thanks for the alternative solution.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29103995
  
--- Diff: phoenix-core/src/main/antlr3/PhoenixSQL.g ---
@@ -114,6 +114,15 @@ tokens
 ASYNC='async';
 SAMPLING='sampling';
 UNION='union';
+FUNCTION='function';
+AS='as';
+REPLACE='replace';
--- End diff --

Removed the REPLACE support for now.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098669
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 ---
@@ -700,6 +1007,29 @@ private PTable loadTable(RegionCoprocessorEnvironment 
env, byte[] key,
 return null;
 }
 
+private PFunction loadFunction(RegionCoprocessorEnvironment env, byte[] key,
+ImmutableBytesPtr cacheKey, long clientTimeStamp, long asOfTimeStamp)
+throws IOException, SQLException {
+HRegion region = env.getRegion();
+Cache<ImmutableBytesPtr,PMetaDataEntity> metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
+PFunction function = (PFunction)metaDataCache.getIfPresent(cacheKey);
+// We always cache the latest version - fault in if not in cache
+if (function != null) {
+return function;
+}
+ArrayList<byte[]> arrayList = new ArrayList<byte[]>(1);
+arrayList.add(key);
+List<PFunction> functions = buildFunctions(arrayList, region, asOfTimeStamp);
+if(functions != null) return functions.get(0);
+// if not found then check if newer table already exists and add delete marker for timestamp
--- End diff --

yes




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098659
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/cache/GlobalCache.java ---
@@ -157,4 +159,18 @@ public TenantCache 
getChildTenantCache(ImmutableBytesWritable tenantId) {
 }
 return tenantCache;
 }
+
+public static class FunctionBytesPtr extends ImmutableBytesPtr {
+
+public FunctionBytesPtr(byte[] key) {
+super(key);
+}
+
+@Override
+public boolean equals(Object obj) {
--- End diff --

I will override hashCode() and just call super.hashCode(); then we won't 
get a checkstyle or FindBugs warning.
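
A sketch of the resulting pair (delegation shown for illustration; the real
equals comparison is in the diff above):

import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;

// Sketch: keep equals and hashCode overridden together so the contract stays
// consistent and the equals-without-hashCode static-analysis warning goes away.
public class FunctionBytesPtr extends ImmutableBytesPtr {

    public FunctionBytesPtr(byte[] key) {
        super(key);
    }

    @Override
    public boolean equals(Object obj) {
        return super.equals(obj); // custom comparison elided here
    }

    @Override
    public int hashCode() {
        return super.hashCode(); // same contract as the parent class
    }
}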




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098664
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/CreateFunctionCompiler.java
 ---
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.compile;
+
+import java.sql.ParameterMetaData;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.util.Collections;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.phoenix.execute.MutationState;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.parse.CreateFunctionStatement;
+import org.apache.phoenix.schema.MetaDataClient;
+
+public class CreateFunctionCompiler {
+
+private final PhoenixStatement statement;
+
+public CreateFunctionCompiler(PhoenixStatement statement) {
+this.statement = statement;
+}
+
+public MutationPlan compile(final CreateFunctionStatement create) 
throws SQLException {
+if(create.isReplace()) {
+throw new SQLFeatureNotSupportedException();
+}
+final PhoenixConnection connection = statement.getConnection();
+PhoenixConnection connectionToBe = connection;
+Scan scan = new Scan();
+final StatementContext context = new StatementContext(statement, 
FromCompiler.EMPTY_TABLE_RESOLVER, scan, new SequenceManager(statement));
--- End diff --

That would be great. Will change.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098873
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
 ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

if (expression.getDeterminism() != Determinism.ALWAYS) {
throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.NON_DETERMINISTIC_EXPRESSION_NOT_ALLOWED_IN_INDEX).build().buildException();
}

Here is the reason for the failure, James... we only hit the problem with 
functional indexes.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098680
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java ---
@@ -289,24 +289,6 @@ public void testNegativeCountStar() throws Exception {
 }
 
 @Test
-public void testUnknownFunction() throws Exception {
--- End diff --

Now the exception will be thrown during compilation, not while parsing, because during parsing we are not able to check whether UDFs are enabled or not. That's why I removed this test.
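
To illustrate the behavior being described, here is a minimal JDBC sketch. The config key phoenix.functions.allowUserDefinedFunctions, the connection URL, and the table/function names are assumptions for illustration; the point is that an unknown or disabled function now fails at compile time rather than parse time.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Properties;

    public class UdfCompileTimeCheck {
        public static void main(String[] args) throws SQLException {
            Properties props = new Properties();
            // Assumed client-side property gating UDF support.
            props.setProperty("phoenix.functions.allowUserDefinedFunctions", "true");
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
                 Statement stmt = conn.createStatement()) {
                // "my_udf" and "my_table" are hypothetical. The parser accepts any
                // function name; resolution happens during compilation, where the
                // flag and the function metadata are available.
                try (ResultSet rs = stmt.executeQuery("SELECT my_udf(col1) FROM my_table")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }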




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96145601
  
Thanks Samarth for the reviews. Will update the patch to address the comments.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098678
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
 ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
+
+private static Configuration config = HBaseConfiguration.create();
+
+private static final ConcurrentMap<PName, DynamicClassLoader> tenantIdSpecificCls =
+new MapMaker().concurrencyLevel(3).weakValues().makeMap();
+
+private static final ConcurrentMap<String, DynamicClassLoader> pathSpecificCls =
+new MapMaker().concurrencyLevel(3).weakValues().makeMap();
+
+public static final Log LOG = LogFactory.getLog(UDFExpression.class);
+
+/**
+ * A locker used to synchronize class loader initialization per tenant id.
+ */
+private static final KeyLocker<String> locker = new KeyLocker<String>();
+
+/**
+ * A locker used to synchronize class loader initialization per jar path.
+ */
+private static final KeyLocker<String> pathLocker = new KeyLocker<String>();
+
+private PName tenantId;
+private String functionClassName;
+private String jarPath;
+private ScalarFunction udfFunction;
+
+public UDFExpression() {
+}
+
+public UDFExpression(List<Expression> children, PFunction functionInfo) {
+super(children);
+this.tenantId =
+functionInfo.getTenantId() == null ? PName.EMPTY_NAME : functionInfo.getTenantId();
+this.functionClassName = functionInfo.getClassName();
+this.jarPath = functionInfo.getJarPath();
+constructUDFFunction();
+}
+
+@Override
+public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
+return udfFunction.evaluate(tuple, ptr);
+}
+
+@Override
+public <T> T accept(ExpressionVisitor<T> visitor) {
+return udfFunction.accept(visitor);
+}
+
+@Override
+public PDataType getDataType() {
+return udfFunction.getDataType();
+}
+
+@Override
+public String getName() {
+return udfFunction.getName();
+}
+
+@Override
+public OrderPreserving preservesOrder() {
+return udfFunction.preservesOrder();
+}
+
+@Override
+public KeyPart newKeyPart(KeyPart childPart

[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098894
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
 ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

java.sql.SQLException: ERROR 521 (42898): Non-deterministic expression not allowed in an index
    at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
    at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
    at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1179)
    at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
    at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:303)
    at org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
    at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:294)
    at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
    at org.apache.phoenix.end2end.UserDefinedFunctionsIT.testFunctionalIndexesWithUDFFunction(UserDefinedFunctionsIT.java:509)




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098632
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -383,6 +448,85 @@ protected TableRef createTableRef(NamedTableNode 
tableNode, boolean updateCacheI
 return tableRef;
 }
 
+@Override
+public List<PFunction> getFunctions() {
+return functions;
+}
+
+protected List<PFunction> createFunctionRef(List<String> functionNames, boolean updateCacheImmediately) throws SQLException {
--- End diff --

No, I will change it to private.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-24 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/77

PHOENIX-538 Support UDFs

Patch to support UDFs. It mainly includes:
- create temporary/permanent function query parsing
- storing function info
- dynamically loading UDF jars
- resolving functions
- making use of UDFs in different queries (see the usage sketch below)
- drop function
- IT tests
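
A rough end-to-end usage sketch of the feature set above (the class name, jar path, connection URL, and table are hypothetical; the DDL syntax follows the UDF support described in this patch):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class UdfUsageSketch {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 Statement stmt = conn.createStatement()) {
                // Register a permanent function backed by a class in a jar that
                // Phoenix loads dynamically (path and class are hypothetical).
                stmt.execute("CREATE FUNCTION my_reverse(varchar) RETURNS varchar "
                        + "AS 'com.example.MyReverseFunction' "
                        + "USING JAR 'hdfs://namenode:8020/hbase/lib/myudfs.jar'");
                // Use the UDF like any built-in function.
                try (ResultSet rs = stmt.executeQuery("SELECT my_reverse(name) FROM people")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1));
                    }
                }
                stmt.execute("DROP FUNCTION my_reverse");
            }
        }
    }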

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/77.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #77


commit a0c62d52492167b0d7c3d7b2036de8acfb762d92
Author: Rajeshbabu Chintaguntla rajeshb...@apache.org
Date:   2015-04-24T23:03:55Z

PHOENIX-538 Support UDFs






[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-14 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/3#issuecomment-48945719
  
bq. Just add the check to disable creating local indexes on a table with 
immutable rows and then let's check this in. 
Changed the pull request to disallow local indexes on tables with immutable rows, and added some test cases.
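
An illustrative sketch of the guard being described (the exception-code name and exact placement are assumptions, not the committed code):

    // Hypothetical guard during index creation; names are assumptions.
    if (indexType == IndexType.LOCAL && dataTable.isImmutableRows()) {
        throw new SQLExceptionInfo.Builder(
                SQLExceptionCode.CANNOT_CREATE_LOCAL_INDEX_ON_IMMUTABLE_TABLE) // hypothetical code
            .setTableName(dataTable.getName().getString())
            .build().buildException();
    }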






[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-14 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/3#discussion_r14897879
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java ---
@@ -169,6 +170,7 @@
 SQLViewTypeFunction(SQLViewTypeFunction.class),
 ExternalSqlTypeIdFunction(ExternalSqlTypeIdFunction.class),
 ConvertTimezoneFunction(ConvertTimezoneFunction.class),
+SQLIndexTypeFunction(SQLIndexTypeFunction.class),
--- End diff --

done.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-14 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/3#discussion_r14898287
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/DefaultParallelIteratorRegionSplitter.java
 ---
@@ -140,7 +142,14 @@ public boolean apply(HRegionLocation location) {
 // distributed across regions, using this scheme compensates for 
regions that
 // have more rows than others, by applying tighter splits and 
therefore spawning
 // off more scans over the overloaded regions.
-int splitsPerRegion = regions.size() >= targetConcurrency ? 1 : (regions.size() > targetConcurrency / 2 ? maxConcurrency : targetConcurrency) / regions.size();
+PTable table = tableRef.getTable();
--- End diff --

bq. why do we need this to be different for local indexes? Seems like we 
could run parallel scans over part of each region just like we do with other 
scans, no?
Local index rows have the region start key as a prefix, so even if we split a region's key range into multiple parts and scan them in parallel, only the first scanner returns results; all the remaining scanners return nothing. That is why the number of splits for a local index region is set to 1.
If we want to run multiple scanners over parts of a local index region, we would first need to split the actual scan ranges into multiple parts.
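
A simplified illustration of the key layout that forces this (the component names are ours, not Phoenix's):

    import org.apache.hadoop.hbase.util.Bytes;

    public class LocalIndexKeyLayout {
        public static void main(String[] args) {
            // Every local index row in a region shares the region start key as
            // a prefix, so the region's whole index slice is one contiguous range.
            byte[] regionStartKey = Bytes.toBytes("regionStart");
            byte[] indexedValues = Bytes.toBytes("indexedValue");
            byte[] dataRowKey = Bytes.toBytes("dataRow");
            byte[] localIndexRowKey = Bytes.add(regionStartKey, indexedValues, dataRowKey);
            // Splitting this region's range N ways and scanning in parallel puts
            // all matches in one sub-range; the other N-1 scans come back empty.
            System.out.println(Bytes.toStringBinary(localIndexRowKey));
        }
    }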




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-14 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/3#discussion_r14898383
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/DefaultParallelIteratorRegionSplitter.java
 ---
@@ -140,7 +142,14 @@ public boolean apply(HRegionLocation location) {
 // distributed across regions, using this scheme compensates for 
regions that
 // have more rows than others, by applying tighter splits and 
therefore spawning
 // off more scans over the overloaded regions.
-int splitsPerRegion = regions.size() >= targetConcurrency ? 1 : (regions.size() > targetConcurrency / 2 ? maxConcurrency : targetConcurrency) / regions.size();
+PTable table = tableRef.getTable();
--- End diff --

bq. getSplitsPerRegion method to ParallelIteratorRegionSplitter so we can 
move this special case to your new implementation for local indexes.
The current patch adds a getSplitsPerRegion method to ParallelIteratorRegionSplitter, returns 1 from it in LocalIndexParallelIteratorRegionSplitter, and removes the changes from the other splitters.
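
A minimal sketch of what that splitter could look like (the constructor and the getSplitsPerRegion signature are assumptions from the discussion, not copied from the patch):

    import org.apache.phoenix.compile.StatementContext;
    import org.apache.phoenix.parse.HintNode;
    import org.apache.phoenix.schema.TableRef;

    public class LocalIndexParallelIteratorRegionSplitter
            extends DefaultParallelIteratorRegionSplitter {

        protected LocalIndexParallelIteratorRegionSplitter(StatementContext context,
                TableRef table, HintNode hintNode) {
            super(context, table, hintNode);
        }

        @Override
        protected int getSplitsPerRegion(int numRegions) {
            // Local index rows are region-local; parallel sub-scans within a
            // region would come back empty, so always use a single split.
            return 1;
        }
    }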




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-14 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/3#issuecomment-48949296
  
Resolved the conflicts after PHOENIX-1002 also. 




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-13 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14858124
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/SkipRangeParallelIteratorRegionSplitter.java
 ---
@@ -54,7 +55,8 @@ protected 
SkipRangeParallelIteratorRegionSplitter(StatementContext context, Tabl
 
 public List<HRegionLocation> filterRegions(List<HRegionLocation> allTableRegions, final ScanRanges ranges) {
 Iterable<HRegionLocation> regions;
-if (ranges == ScanRanges.EVERYTHING) {
+if (ranges == ScanRanges.EVERYTHING
--- End diff --

Even with the change the skip scan will still be used, James. The change is required because the key ranges generated by the compiler won't fall within the local index region's key range, since local index rows carry the region start key as an extra prefix. Without the change, mostly no region would be selected for scanning.

QueryIT#testSimpleInListStatement is the test case that verifies this. Here is the explain query result:

CLIENT PARALLEL 4-WAY SKIP SCAN ON 2 KEYS OVER _LOCAL_IDX_ATABLE [-32768,2] - [-32768,4]
SERVER FILTER BY FIRST KEY ONLY AND ORGANIZATION_ID = '00D3XHP'
CLIENT MERGE SORT
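
A sketch of the guard in filterRegions as described (the condition shape is inferred from the diff; the helper name is hypothetical):

    // For a local index, filtering regions by the compiled key ranges would
    // select nothing, because actual rows carry the region start key as a
    // prefix the compiler does not account for. Keep all regions and let the
    // skip scan filter rows server-side.
    if (ranges == ScanRanges.EVERYTHING
            || tableRef.getTable().getIndexType() == IndexType.LOCAL) {
        regions = allTableRegions;
    } else {
        regions = filterByKeyRanges(allTableRegions, ranges); // hypothetical helper
    }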




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-13 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/1#issuecomment-48839233
  
bq. Cleanest might be to just implement a simple 
ParallelIteratorRegionSplitter for use when a local index is used that just 
returns all regions:
I will add a new ParallelIteratorRegionSplitter for local indexes and remove the unnecessary changes in SkipRangeParallelIteratorRegionSplitter/DefaultParallelIteratorRegionSplitter.

Then I will submit another pull request. 

Thanks @JamesRTaylor 




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-13 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/3

PHOENIX-933 Local index support to Phoenix

Updated pull request after resolving conflicts and addressing James's review comments.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/3.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3


commit 014ec218b151ef960347b875b373828ad8fe8681
Author: James Taylor jamestay...@apache.org
Date:   2014-04-17T01:28:18Z

Add local index to grammar and metadata

commit 1855a8d1c6b2064d81f5fffc27c18342599cf7b0
Author: James Taylor jamestay...@apache.org
Date:   2014-04-18T23:31:29Z

PHOENIX-936 Custom load balancer to colocate user table regions and index 
table regions (Rajeshbabu)

commit 1bc8b57801e299635ac601aea5dc7ba8d4915a2f
Author: James Taylor jamestay...@apache.org
Date:   2014-04-26T04:36:03Z

PHOENIX-935 create local index table with the same split keys of user table 
(Rajeshbabu)

commit b02df4f054f78a866f25d9beb4e7fc2242df2a1f
Author: James Taylor jamestay...@apache.org
Date:   2014-04-28T05:26:27Z

PHOENIX-935 create local index table with the same split keys of user table 
(Rajeshbabu)

commit 8f3f8100210538d0c162cbbc5e02efc63db52d31
Author: James Taylor jamestay...@apache.org
Date:   2014-04-28T16:31:31Z

PHOENIX-955 Skip region start key at beginning of local index rows 
(JamesTaylor)

commit 8e35bc99068da33db4e2019d9bc65d8d78bee4d2
Author: James Taylor jtay...@salesforce.com
Date:   2014-05-08T06:23:37Z

PHOENIX-937 Handle puts on local index table (Rajeshbabu)

commit cf78def0579f51a5342bcfbff802902478052d25
Author: James Taylor jtay...@salesforce.com
Date:   2014-05-23T03:48:01Z

PHOENIX-994 Handle scans on local index table in case any best fit covering 
local index available (Rajeshbabu)

commit 835bcf675db9435da1335bbfb6d55d6d7edd86b3
Author: James Taylor jtay...@salesforce.com
Date:   2014-05-27T19:51:56Z

PHOENIX-1004 'drop index' should delete index data from local index table 
(Rajeshbabu)

commit 2edd2b6a19deef0847086da2c7b30250ed2907c7
Author: James Taylor jtay...@salesforce.com
Date:   2014-06-09T18:11:35Z

PHOENIX-1038 Dynamically add INDEX_TYPE column to SYSTEM.CATALOG if not 
already there

commit 0408b333780f4ce428e5d9b018e5cd13934d0f84
Author: Ramkrishna ramkrishna.s.vasude...@intel.com
Date:   2014-07-04T10:19:41Z

PHOENIX-1015 Support joining back to data table row from local index when 
query condition involves leading columns in local index (Rajeshbabu)

commit 3df832e73e138320a30e5c5d0c80680b79de6358
Author: Rajeshbabu Chintaguntla rajeshbabu.chintagun...@huawei.com
Date:   2014-07-12T17:14:44Z

PHOENIX-1015 Support joining back to data table row from local index when 
query condition involves leading columns in local index

commit 84097757a3173adc2a3b464901b79f35bc9d07cb
Author: Rajeshbabu Chintaguntla rajeshbabu.chintagun...@huawei.com
Date:   2014-07-13T17:26:13Z

PHOENIX-1015 Support joining back to data table row from local index when 
query condition involves leading columns in local index






[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-12 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14853783
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/ScanRegionObserver.java
 ---
@@ -100,7 +109,7 @@ public static void serializeIntoScan(Scan scan, int 
thresholdBytes, int limit, L
 }
 }
 
-public static OrderedResultIterator deserializeFromScan(Scan scan, 
RegionScanner s) {
+public static OrderedResultIterator deserializeFromScan(Scan scan, 
RegionScanner s, int offset) {
--- End diff --

bq. There's a bit more you need to do to handle ORDER BY correctly. It'd be 
for the case in which a data column was referenced in the ORDER BY while the 
index table is being used to satisfy the query.
This is working fine, James. I have added a test case.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834867
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/CreateIndexCompiler.java 
---
@@ -47,6 +51,21 @@ public MutationPlan compile(final CreateIndexStatement 
create) throws SQLExcepti
 final StatementContext context = new StatementContext(statement, 
resolver, scan);
 ExpressionCompiler expressionCompiler = new 
ExpressionCompiler(context);
+List<ParseNode> splitNodes = create.getSplitNodes();
+if (create.getIndexType() == IndexType.LOCAL) {
+if (!splitNodes.isEmpty()) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.CANNOT_SPLIT_LOCAL_INDEX)
+.build().buildException();
+} 
+if (create.getProps() != null && create.getProps().get() != null) {
+List<Pair<String, Object>> list = create.getProps().get();
--- End diff --

corrected.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834896
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java 
---
@@ -282,7 +286,7 @@ public Expression visitLeave(FunctionParseNode node, 
ListExpression children)
 children = node.validate(children, context);
 Expression expression = node.create(children, context);
 ImmutableBytesWritable ptr = context.getTempPtr();
-if (node.isStateless()) {
+if (node.isStateless() && expression.isDeterministic()) {
--- End diff --

Yes, James. This change is already there in the master branch.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834954
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/IndexStatementRewriter.java
 ---
@@ -96,6 +96,12 @@ public ParseNode visit(ColumnParseNode node) throws 
SQLException {
 
 String indexColName = IndexUtil.getIndexColumnName(dataCol);
 // Same alias as before, but use the index column name instead of 
the data column name
+// TODO: add dataColRef as an alternate ColumnParseNode in the 
case that the index
--- End diff --

removed.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850777
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -366,6 +384,21 @@ private RegionScanner scanUnordered(ObserverContext<RegionCoprocessorEnvironment>
 env, ScanUtil.getTenantId(scan),
 aggregators, estDistVals);
 
+byte[] localIndexBytes = scan.getAttribute(LOCAL_INDEX_BUILD);
--- End diff --

Moved the changes outside of scanOrdered/scanUnordered and pass the necessary info through the method calls.

