[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-24 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/77

PHOENIX-538 Support UDFs

Patch to support UDFs. It mainly includes:
- parsing of CREATE TEMPORARY/PERMANENT FUNCTION statements
- storing function metadata
- dynamically loading UDF jars
- resolving functions
- making use of UDFs in different queries
- DROP FUNCTION support
- integration tests
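Taken together, these pieces enable a UDF lifecycle along the following lines. This is an illustrative sketch only: the function, class, and jar names below are hypothetical and not part of this patch.

```sql
-- register a permanent UDF backed by a class in a jar
CREATE FUNCTION my_reverse(varchar) RETURNS varchar
    AS 'com.example.MyReverseFunction'
    USING JAR 'hdfs://namenode:8020/hbase/lib/myudfs.jar';

-- a temporary function is registered for the current session only
CREATE TEMPORARY FUNCTION my_abs(integer) RETURNS integer
    AS 'com.example.MyAbsFunction';

-- UDFs are then resolved and used like built-in functions
SELECT my_reverse(name) FROM employees WHERE my_abs(delta) > 10;

DROP FUNCTION my_reverse;
```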

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/77.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #77


commit a0c62d52492167b0d7c3d7b2036de8acfb762d92
Author: Rajeshbabu Chintaguntla 
Date:   2015-04-24T23:03:55Z

PHOENIX-538 Support UDFs




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098632
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java ---
@@ -383,6 +448,85 @@ protected TableRef createTableRef(NamedTableNode tableNode, boolean updateCacheI
 return tableRef;
 }
 
+@Override
+public List<PFunction> getFunctions() {
+return functions;
+}
+
+protected List<PFunction> createFunctionRef(List<String> functionNames, boolean updateCacheImmediately) throws SQLException {
--- End diff --

No, will change it to private.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098664
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/CreateFunctionCompiler.java ---
@@ -0,0 +1,84 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.compile;
+
+import java.sql.ParameterMetaData;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.util.Collections;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.phoenix.execute.MutationState;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.parse.CreateFunctionStatement;
+import org.apache.phoenix.schema.MetaDataClient;
+
+public class CreateFunctionCompiler {
+
+private final PhoenixStatement statement;
+
+public CreateFunctionCompiler(PhoenixStatement statement) {
+this.statement = statement;
+}
+
+public MutationPlan compile(final CreateFunctionStatement create) throws SQLException {
+if(create.isReplace()) {
+throw new SQLFeatureNotSupportedException();
+}
+final PhoenixConnection connection = statement.getConnection();
+PhoenixConnection connectionToBe = connection;
+Scan scan = new Scan();
+final StatementContext context = new StatementContext(statement, FromCompiler.EMPTY_TABLE_RESOLVER, scan, new SequenceManager(statement));
--- End diff --

That would be great. Will change.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098659
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/GlobalCache.java ---
@@ -157,4 +159,18 @@ public TenantCache getChildTenantCache(ImmutableBytesWritable tenantId) {
 }
 return tenantCache;
 }
+
+public static class FunctionBytesPtr extends ImmutableBytesPtr {
+
+public FunctionBytesPtr(byte[] key) {
+super(key);
+}
+
+@Override
+public boolean equals(Object obj) {
--- End diff --

I will override hashCode() and just call super.hashCode(); then we won't get checkstyle or findbugs warnings.
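As a minimal sketch of the pattern being discussed (the class names here are simplified stand-ins, not Phoenix's actual classes): overriding equals() in a subclass without also overriding hashCode() trips checkstyle/findbugs (HE_EQUALS_NO_HASHCODE), while delegating hashCode() to the superclass keeps the equals/hashCode contract when equals() only narrows the accepted type.

```java
import java.util.Arrays;

// Hypothetical stand-in for an ImmutableBytesPtr-like base class.
class BytesPtr {
    protected final byte[] bytes;
    BytesPtr(byte[] bytes) { this.bytes = bytes; }
    @Override public boolean equals(Object o) {
        return o instanceof BytesPtr && Arrays.equals(bytes, ((BytesPtr) o).bytes);
    }
    @Override public int hashCode() { return Arrays.hashCode(bytes); }
}

// Hypothetical stand-in for FunctionBytesPtr: equals() narrows the
// accepted type, so hashCode() must be overridden too, even if it
// simply delegates to super.hashCode().
class FunctionBytesPtr extends BytesPtr {
    FunctionBytesPtr(byte[] key) { super(key); }
    @Override public boolean equals(Object o) {
        return o instanceof FunctionBytesPtr && super.equals(o);
    }
    // Delegating preserves the contract: objects that are equal
    // (same bytes, same type) produce equal hash codes.
    @Override public int hashCode() { return super.hashCode(); }
}
```

Because hashCode() is still based solely on the bytes, instances that compare equal hash to the same bucket, which is what hash-based caches keyed on these pointers rely on.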




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098669
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java ---
@@ -700,6 +1007,29 @@ private PTable loadTable(RegionCoprocessorEnvironment env, byte[] key,
 return null;
 }
 
+private PFunction loadFunction(RegionCoprocessorEnvironment env, byte[] key,
+ImmutableBytesPtr cacheKey, long clientTimeStamp, long asOfTimeStamp)
+throws IOException, SQLException {
+HRegion region = env.getRegion();
+Cache metaDataCache = GlobalCache.getInstance(this.env).getMetaDataCache();
+PFunction function = (PFunction)metaDataCache.getIfPresent(cacheKey);
+// We always cache the latest version - fault in if not in cache
+if (function != null) {
+return function;
+}
+ArrayList arrayList = new ArrayList(1);
+arrayList.add(key);
+List functions = buildFunctions(arrayList, region, asOfTimeStamp);
+if(functions != null) return functions.get(0);
+// if not found then check if newer table already exists and add delete marker for timestamp
--- End diff --

yes
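The fault-in pattern in the loadFunction() diff above (return the cached latest version if present, otherwise build it and cache it) can be sketched generically. The names here are hypothetical stand-ins, not Phoenix's classes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Generic sketch of the cache fault-in pattern: the latest version is
// always cached, and a miss triggers a build (in Phoenix's case, a
// scan of the region) whose result is then cached.
class MetaDataCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> builder;

    MetaDataCache(Function<K, V> builder) { this.builder = builder; }

    V getOrBuild(K key) {
        V cached = cache.get(key);       // fast path: already faulted in
        if (cached != null) return cached;
        V built = builder.apply(key);    // miss: build from backing store
        if (built != null) cache.put(key, built);
        return built;
    }
}
```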




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098678
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
+
+private static Configuration config = HBaseConfiguration.create();
+
+private static final ConcurrentMap<PName, DynamicClassLoader> tenantIdSpecificCls =
+new MapMaker().concurrencyLevel(3).weakValues().makeMap();
+
+private static final ConcurrentMap<String, DynamicClassLoader> pathSpecificCls =
+new MapMaker().concurrencyLevel(3).weakValues().makeMap();
+
+public static final Log LOG = LogFactory.getLog(UDFExpression.class);
+
+/**
+ * A locker used to synchronize class loader initialization per tenant id.
+ */
+private static final KeyLocker<PName> locker = new KeyLocker<PName>();
+
+/**
+ * A locker used to synchronize class loader initialization per jar path.
+ */
+private static final KeyLocker<String> pathLocker = new KeyLocker<String>();
+
+private PName tenantId;
+private String functionClassName;
+private String jarPath;
+private ScalarFunction udfFunction;
+
+public UDFExpression() {
+}
+
+public UDFExpression(List<Expression> children, PFunction functionInfo) {
+super(children);
+this.tenantId =
+functionInfo.getTenantId() == null ? PName.EMPTY_NAME : functionInfo.getTenantId();
+this.functionClassName = functionInfo.getClassName();
+this.jarPath = functionInfo.getJarPath();
+constructUDFFunction();
+}
+
+@Override
+public boolean evaluate(Tuple tuple, ImmutableBytesWritable ptr) {
+return udfFunction.evaluate(tuple, ptr);
+}
+
+@Override
+public <T> T accept(ExpressionVisitor<T> visitor) {
+return udfFunction.accept(visitor);
+}
+
+@Override
+public PDataType getDataType() {
+return udfFunction.getDataType();
+}
+
+@Override
+public String getName() {
+return udfFunction.getName();
+}
+
+@Override
+public OrderPreserving preservesOrder() {
+return udfFunction.preservesOrder();
+}
+
+@Override
+public KeyPart newKeyPart(KeyPart childPart) {
+return udfFunction.

[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098680
  
--- Diff: phoenix-core/src/test/java/org/apache/phoenix/parse/QueryParserTest.java ---
@@ -289,24 +289,6 @@ public void testNegativeCountStar() throws Exception {
 }
 
 @Test
-public void testUnknownFunction() throws Exception {
--- End diff --

Now the exception will be thrown during compilation, not while parsing, because during parsing we are not able to check whether UDFs are enabled or not. That's why I removed it.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96145601
  
Thanks Samarth for reviews. Will update patch addressing the comments.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098873
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

if (expression.getDeterminism() != Determinism.ALWAYS) {
throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.NON_DETERMINISTIC_EXPRESSION_NOT_ALLOWED_IN_INDEX).build().buildException();
}

Here is the reason for the failure, James... we are getting the problem with functional indexes only.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29098894
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java
 ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

java.sql.SQLException: ERROR 521 (42898): Non-deterministic expression not 
allowed in an index
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:386)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1179)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:95)
at 
org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:303)
at 
org.apache.phoenix.jdbc.PhoenixStatement$3.call(PhoenixStatement.java:1)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:294)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1247)
at 
org.apache.phoenix.end2end.UserDefinedFunctionsIT.testFunctionalIndexesWithUDFFunction(UserDefinedFunctionsIT.java:509)




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29103983
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/expression/function/UDFExpression.java ---
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import static org.apache.phoenix.query.QueryServices.DYNAMIC_JARS_DIR_KEY;
+
+import java.io.DataInput;
+import java.io.DataOutput;
+import java.io.IOException;
+import java.lang.reflect.Constructor;
+import java.lang.reflect.InvocationTargetException;
+import java.util.List;
+import java.util.concurrent.ConcurrentMap;
+import java.util.concurrent.locks.Lock;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.DynamicClassLoader;
+import org.apache.hadoop.hbase.util.KeyLocker;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.compile.KeyPart;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.parse.PFunction;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+
+import com.google.common.annotations.VisibleForTesting;
+import com.google.common.collect.MapMaker;
+
+public class UDFExpression extends ScalarFunction {
--- End diff --

Added this, James. Thanks for the alternative solution.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/77#discussion_r29103995
  
--- Diff: phoenix-core/src/main/antlr3/PhoenixSQL.g ---
@@ -114,6 +114,15 @@ tokens
 ASYNC='async';
 SAMPLING='sampling';
 UNION='union';
+FUNCTION='function';
+AS='as';
+REPLACE='replace';
--- End diff --

Removed REPLACE for now.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-25 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96276745
  
Thanks @JamesRTaylor @samarthjain. I have addressed the review comments and added them to the pull request.
If it's OK, I will commit this tomorrow morning IST and work on the subtasks.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/77#issuecomment-96969719
  
It's committed. Hence closing.




[GitHub] phoenix pull request: PHOENIX-538 Support UDFs

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/77




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2015-04-28 Thread chrajeshbabu
Github user chrajeshbabu closed the pull request at:

https://github.com/apache/phoenix/pull/3




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-03-29 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/156

PHOENIX-2628 Ensure split when iterating through results handled corr…

The patch fixes issues with splits and merges while scanning local indexes. 
 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/156.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #156


commit d2e4d166dc2b32ed725183fc4465378029bd7834
Author: Rajeshbabu Chintaguntla 
Date:   2016-03-29T16:45:02Z

PHOENIX-2628 Ensure split when iterating through results handled 
correctly(Rajeshbabu)






[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485324
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java ---
@@ -727,7 +727,7 @@ public void testLocalIndexScanAfterRegionSplit() throws Exception {
 assertTrue(rs.next());
 
 HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
-for (int i = 1; i < 5; i++) {
+for (int i = 1; i < 2; i++) {
--- End diff --

While debugging a failure I changed it to 2. 5 is OK; I will keep it as it is.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485341
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java ---
@@ -86,8 +98,8 @@ public static void doSetup() throws Exception {

@Parameters(name="localIndex = {0} , transactional = {1}")
 public static Collection data() {
-return Arrays.asList(new Boolean[][] { 
 - { false, false }, { false, true }, { true, false }, { true, true }
--- End diff --

Yes James, it's a whitespace-only change.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485360
  
--- Diff: phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java ---
@@ -157,6 +162,7 @@ public Reader preStoreFileReaderOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-04 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58485551
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

Sure will try to remove the local index scan check.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58632305
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

@JamesRTaylor  we cannot always use the keyOffset > 0 check to detect a local
index scan because, in the special case where a table has only one region, both
the region start key length and the end key length are zero, so keyOffset also
becomes zero. In that case we still need to check whether the scan is a local
index scan in order to set the attributes properly. Wdyt?
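To illustrate the single-region corner case described above, here is a minimal sketch. The helper below is a toy stand-in for the idea behind `ScanUtil.getRowKeyOffset`, not Phoenix's actual implementation: HBase uses empty byte arrays for the boundaries of the first and last regions, so a table with exactly one region has empty start and end keys and the derived offset is 0.

```java
import java.util.Arrays;

public class KeyOffsetSketch {
    // Toy stand-in: derive the row-key offset from the region boundary key
    // lengths, preferring the start key when it is non-empty.
    static int getRowKeyOffset(byte[] regionStartKey, byte[] regionEndKey) {
        return regionStartKey.length > 0 ? regionStartKey.length : regionEndKey.length;
    }

    public static void main(String[] args) {
        // A table with a single region: both boundary keys are empty.
        byte[] start = new byte[0];
        byte[] end = new byte[0];
        int keyOffset = getRowKeyOffset(start, end);
        // keyOffset == 0, so a "keyOffset > 0" test cannot distinguish a
        // local index scan here; an explicit attribute check is still needed.
        System.out.println(Arrays.toString(start) + " -> offset " + keyOffset); // prints [] -> offset 0
    }
}
```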




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58637933
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

Yes, aggregate queries are already handled properly. This code only handles
splits that happen in the middle of non-aggregate queries. If there is a split
at the start of the query, we throw the stale region exception out to
BaseResultIterators, which handles creating the proper parallel scans.
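The mid-scan retry idea discussed above can be sketched as follows. This is a simplified model, not Phoenix's actual `TableResultIterator`: the `Scanner` interface, `StaleRegionException`, and `scanAll` helper are hypothetical stand-ins showing how remembering the last returned key lets a scan resume past a region split instead of failing.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.function.Function;

public class SplitRetrySketch {
    // Hypothetical stand-in for Phoenix's StaleRegionBoundaryCacheException.
    static class StaleRegionException extends RuntimeException {}

    interface Scanner { String next(); } // returns null when exhausted

    // Sketch of the TableResultIterator.next() idea: remember the key of the
    // last row returned and, on a mid-scan split, reopen the scan just past
    // that key rather than failing the whole query.
    static String scanAll(Scanner scanner, Function<String, Scanner> reopenAfter) {
        StringBuilder seen = new StringBuilder();
        String lastKey = null;
        Scanner current = scanner;
        while (true) {
            String row;
            try {
                row = current.next();
            } catch (StaleRegionException e) {
                // Region split mid-scan: restart from just after the last row.
                current = reopenAfter.apply(lastKey);
                row = current.next();
            }
            if (row == null) break;
            lastKey = row;
            seen.append(seen.length() == 0 ? "" : " ").append(row);
        }
        return seen.toString();
    }

    public static void main(String[] args) {
        // A scanner that returns one row, then fails as if the region split.
        Deque<String> first = new ArrayDeque<>(Arrays.asList("row1"));
        Scanner flaky = () -> {
            if (!first.isEmpty()) return first.poll();
            throw new StaleRegionException();
        };
        // Reopening after "row1" yields the remaining rows from the new regions.
        Function<String, Scanner> reopen = k -> {
            Deque<String> rest = new ArrayDeque<>(Arrays.asList("row2", "row3"));
            return () -> rest.poll();
        };
        System.out.println(scanAll(flaky, reopen)); // prints: row1 row2 row3
    }
}
```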





[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638568
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -423,7 +426,14 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638946
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java ---
@@ -385,9 +387,25 @@ public Scan intersectScan(Scan scan, final byte[] 
originalStartKey, final byte[]
 if (scanStopKey.length > 0 && Bytes.compareTo(scanStartKey, 
scanStopKey) >= 0) { 
 return null; 
 }
-newScan.setAttribute(SCAN_ACTUAL_START_ROW, scanStartKey);
-newScan.setStartRow(scanStartKey);
-newScan.setStopRow(scanStopKey);
+if(ScanUtil.isLocalIndex(scan)) {
--- End diff --

I will try moving it outside of this method.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58638911
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -402,8 +405,8 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639358
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

You mean we can check the region boundary again in postScannerOpen and, if it's
out of range, throw a stale region boundary exception which will be handled by
BaseResultIterators?




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639845
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -80,13 +94,19 @@
 };
 
 
-public TableResultIterator(MutationState mutationState, TableRef 
tableRef, Scan scan, CombinableMetric scanMetrics, long renewLeaseThreshold) 
throws SQLException {
+public TableResultIterator(MutationState mutationState, Scan scan, 
CombinableMetric scanMetrics, long renewLeaseThreshold, QueryPlan plan) throws 
SQLException {
+this(mutationState, scan, scanMetrics, renewLeaseThreshold, plan, 
false);
+}
+
+public TableResultIterator(MutationState mutationState, Scan scan, 
CombinableMetric scanMetrics, long renewLeaseThreshold, QueryPlan plan, boolean 
handleSplitRegionBoundaryFailureDuringInitialization) throws SQLException {
 this.scan = scan;
 this.scanMetrics = scanMetrics;
-PTable table = tableRef.getTable();
+PTable table = plan.getTableRef().getTable();
 htable = mutationState.getHTable(table);
 this.scanIterator = UNINITIALIZED_SCANNER;
 this.renewLeaseThreshold = renewLeaseThreshold;
+this.plan = plan;
+this.handleSplitRegionBoundaryFailureDuringInitialization = 
handleSplitRegionBoundaryFailureDuringInitialization;
--- End diff --

handleSplitRegionBoundaryFailureDuringInitialization will be true when scanning
intermediate chunks in ChunkedResultIterator. When we create the scanner for a
new chunk we might see stale region boundaries; in that case we also need to
forcibly recreate the iterator with the new boundaries. In the normal case, if
we get a StaleRegionBoundaryCacheException while creating the scanner,
BaseResultIterators handles it.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639956
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
--- End diff --

Will raise an improvement issue for this, James.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639974
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
--- End diff --

Will move the check to ScanUtil.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58639989
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
--- End diff --

I will add.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640276
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -121,8 +170,21 @@ public synchronized void initScanner() throws 
SQLException {
 this.scanIterator =
 new 
ScanningResultIterator(htable.getScanner(scan), scanMetrics);
 } catch (IOException e) {
-Closeables.closeQuietly(htable);
-throw ServerUtil.parseServerException(e);
+if(handleSplitRegionBoundaryFailureDuringInitialization) {
--- End diff --

This is required for ChunkedResultIterator; if we are going to deprecate it or
stop using it, we don't need these changes. Other than that, we might need to
deal with it in PhoenixRecordReader (currently we are skipping the range check,
but for local indexes handling it is compulsory, otherwise we might miss
records from one of the daughter regions).




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640455
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
+Scan newScan = ScanUtil.newScan(scan);
+if(ScanUtil.isLocalIndex(scan)) {
+newScan.setStartRow(regionInfo.getStartKey());
+newScan.setAttribute(SCAN_ACTUAL_START_ROW, 
regionInfo.getStartKey());
--- End diff --

The first one is the actual start key of the scan; the second one is the end
key of the region. After this patch we don't need EXPECTED_UPPER_REGION_KEY,
but it is kept for compatibility.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640818
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
--- End diff --

If I move creating the parallel scans for this special case out, I need to
duplicate a lot of code. That's why I have added the special case as part of
creating the existing parallel scans.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58640990
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -465,7 +465,14 @@ private static String toString(List gps) {
 }
 
 private List> getParallelScans() throws SQLException {
-return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
+if (scan == null
+|| (ScanUtil.isLocalIndex(scan)
+&& 
Bytes.compareTo(context.getScan().getStartRow(), scan.getStartRow()) == 0 && 
Bytes
+.compareTo(context.getScan().getStopRow(), 
scan.getStopRow()) == 0)) {
+return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
--- End diff --

This check detects whether the scan boundaries are equal to the context scan
boundaries. If they are the same, we proceed by getting all parallel scans for
the table. Will document it.
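The boundary comparison described above can be sketched in isolation. This is a hypothetical helper, not the actual `BaseResultIterators` code: it only shows the dispatch logic of comparing a scan's start/stop keys against the statement context's scan to decide between full-table parallel scans and a narrowed range.

```java
import java.util.Arrays;

public class ScanBoundarySketch {
    // Sketch of the getParallelScans() dispatch: when the supplied scan covers
    // exactly the same range as the statement context's scan, fall back to
    // generating parallel scans over the whole table (empty start/stop keys).
    static boolean coversContextRange(byte[] scanStart, byte[] scanStop,
                                      byte[] ctxStart, byte[] ctxStop) {
        return Arrays.equals(scanStart, ctxStart) && Arrays.equals(scanStop, ctxStop);
    }

    public static void main(String[] args) {
        byte[] empty = new byte[0];
        // Same boundaries as the context scan -> full-table parallel scans.
        System.out.println(coversContextRange(empty, empty, empty, empty)); // prints true
        // A narrower scan (e.g. a retried range after a split) keeps its own range.
        System.out.println(coversContextRange(new byte[]{1}, empty, empty, empty)); // prints false
    }
}
```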




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58641009
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -465,7 +465,14 @@ private static String toString(List gps) {
 }
 
 private List> getParallelScans() throws SQLException {
-return getParallelScans(EMPTY_BYTE_ARRAY, EMPTY_BYTE_ARRAY);
+if (scan == null
+|| (ScanUtil.isLocalIndex(scan)
+&& 
Bytes.compareTo(context.getScan().getStartRow(), scan.getStartRow()) == 0 && 
Bytes
+.compareTo(context.getScan().getStopRow(), 
scan.getStopRow()) == 0)) {
--- End diff --

Sure, will move it to ScanUtil.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58641031
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -326,11 +325,12 @@ private static void 
optimizeProjection(StatementContext context, Scan scan, PTab
 }
 }
 
-public BaseResultIterators(QueryPlan plan, Integer perScanLimit, 
ParallelScanGrouper scanGrouper) throws SQLException {
+public BaseResultIterators(QueryPlan plan, Integer perScanLimit, 
ParallelScanGrouper scanGrouper, Scan scan) throws SQLException {
 super(plan.getContext(), plan.getTableRef(), plan.getGroupBy(), 
plan.getOrderBy(), plan.getStatement().getHint(), plan.getLimit());
 this.plan = plan;
 this.scanGrouper = scanGrouper;
 StatementContext context = plan.getContext();
+this.scan = scan == null ? context.getScan() : scan;
--- End diff --

This null check is not required; will remove it.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58642188
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java 
---
@@ -107,8 +127,37 @@ public synchronized void close() throws SQLException {
 @Override
 public synchronized Tuple next() throws SQLException {
 initScanner();
-Tuple t = scanIterator.next();
-return t;
+try {
+lastTuple = scanIterator.next();
+if (lastTuple != null) {
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+lastTuple.getKey(ptr);
+}
+} catch (SQLException e) {
+try {
+throw ServerUtil.parseServerException(e);
+} catch(StaleRegionBoundaryCacheException e1) {
+if(scan.getAttribute(NON_AGGREGATE_QUERY)!=null) {
+Scan newScan = ScanUtil.newScan(scan);
+if(lastTuple != null) {
+lastTuple.getKey(ptr);
+byte[] startRowSuffix = 
ByteUtil.copyKeyBytesIfNecessary(ptr);
+if(ScanUtil.isLocalIndex(newScan)) {
+newScan.setAttribute(SCAN_START_ROW_SUFFIX, 
ByteUtil.nextKey(startRowSuffix));
+} else {
+
newScan.setStartRow(ByteUtil.nextKey(startRowSuffix));
+}
+}
+
plan.getContext().getConnection().getQueryServices().clearTableRegionCache(htable.getTableName());
+this.scanIterator =
+
plan.iterator(DefaultParallelScanGrouper.getInstance(), newScan);
--- End diff --

Agree with you, James. I think I can raise another issue for this and work on
it. Wdyt?




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-05 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r58642550
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java 
---
@@ -556,35 +564,55 @@ private static String toString(List gps) {
 } else {
 endKey = regionBoundaries.get(regionIndex);
 }
-HRegionLocation regionLocation = 
regionLocations.get(regionIndex);
-if (isLocalIndex) {
-HRegionInfo regionInfo = 
regionLocation.getRegionInfo();
-endRegionKey = regionInfo.getEndKey();
-keyOffset = 
ScanUtil.getRowKeyOffset(regionInfo.getStartKey(), endRegionKey);
-}
-try {
-while (guideIndex < gpsSize && 
(currentGuidePost.compareTo(endKey) <= 0 || endKey.length == 0)) {
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, currentGuidePostBytes, keyOffset,
-false);
-estimatedRows += 
gps.getRowCounts().get(guideIndex);
-estimatedSize += 
gps.getByteCounts().get(guideIndex);
-scans = addNewScan(parallelScans, scans, newScan, 
currentGuidePostBytes, false, regionLocation);
-currentKeyBytes = currentGuidePost.copyBytes();
-currentGuidePost = PrefixByteCodec.decode(decoder, 
input);
-currentGuidePostBytes = 
currentGuidePost.copyBytes();
-guideIndex++;
-}
-} catch (EOFException e) {}
-Scan newScan = scanRanges.intersectScan(scan, 
currentKeyBytes, endKey, keyOffset, true);
-if (isLocalIndex) {
-if (newScan != null) {
-newScan.setAttribute(EXPECTED_UPPER_REGION_KEY, 
endRegionKey);
-} else if (!scans.isEmpty()) {
-
scans.get(scans.size()-1).setAttribute(EXPECTED_UPPER_REGION_KEY, endRegionKey);
-}
-}
-scans = addNewScan(parallelScans, scans, newScan, endKey, 
true, regionLocation);
-currentKeyBytes = endKey;
+if (Bytes.compareTo(scan.getStartRow(), 
context.getScan().getStartRow()) != 0
+ || Bytes.compareTo(scan.getStopRow(), 
context.getScan().getStopRow()) != 0) {
--- End diff --

getParallelScans is the entry point. Let me move it out.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59077319
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
 ---
@@ -279,6 +301,31 @@ protected RegionScanner getWrappedScanner(final 
ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59077569
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
 ---
@@ -337,6 +384,22 @@ public boolean nextRaw(List result) throws 
IOException {
 arrayElementCell = 
result.get(arrayElementCellPosition);
 }
 if (ScanUtil.isLocalIndex(scan) && 
!ScanUtil.isAnalyzeTable(scan)) {
--- End diff --

We are getting the results only after the seek. Will check once again whether
we can handle this in the local index scanner.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59077866
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -402,8 +405,8 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/156#discussion_r59078097
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/iterate/ChunkedResultIterator.java
 ---
@@ -56,6 +57,7 @@
 private final MutationState mutationState;
 private Scan scan;
 private PeekingResultIterator resultIterator;
+private QueryPlan plan;
 
 public static class ChunkedResultIteratorFactory implements 
ParallelIteratorFactory {
--- End diff --

I have removed the changes required for ChunkedResultIterator for now. The
remaining changes in ChunkedResultIterator just remove code that is not needed.




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-08 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/156#issuecomment-207582047
  
@JamesRTaylor  Thanks for the review. Committed the changes addressing the 
review comments.
Refactored the code and added code comments wherever possible. 

Now not handling the special cases required for ChunkedResultIterator. 

bq. Another potential, different approach would be for Phoenix to 
universally handle the split during scan case (rather than letting the HBase 
client scanner handle it for non aggregate case and Phoenix handle it for the 
aggregate case). Would that simplify things?
Now we throw a stale region boundary exception in all cases whenever we get an 
NSRE, so the Phoenix client can handle the NSRE rather than the HBase client 
handling it with wrong region boundaries.

The ordered aggregate queries issue you are mentioning can be handled as part 
of another issue.

Please review the latest code.
Thanks.
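The NSRE-translation approach summarized above can be sketched as follows. The exception classes and `translate` helper here are hypothetical stand-ins (not HBase's or Phoenix's actual types): the point is only that a server-side NotServingRegionException is rethrown as a stale-boundary exception so the Phoenix client recomputes its parallel scans instead of the HBase client retrying with stale boundaries.

```java
public class NsreTranslationSketch {
    // Hypothetical stand-ins for HBase's NotServingRegionException and
    // Phoenix's StaleRegionBoundaryCacheException.
    static class NotServingRegionException extends RuntimeException {}
    static class StaleRegionBoundaryCacheException extends RuntimeException {}

    // Translate a server-side NSRE into a stale-boundary exception so the
    // client-side iterator machinery rebuilds the scans; anything else is
    // passed through wrapped.
    static RuntimeException translate(Throwable serverError) {
        if (serverError instanceof NotServingRegionException) {
            return new StaleRegionBoundaryCacheException();
        }
        return new RuntimeException(serverError);
    }

    public static void main(String[] args) {
        RuntimeException e = translate(new NotServingRegionException());
        System.out.println(e instanceof StaleRegionBoundaryCacheException); // prints true
    }
}
```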




[GitHub] phoenix pull request: PHOENIX-2628 Ensure split when iterating thr...

2016-04-10 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/156#issuecomment-208184999
  
Thanks @JamesRTaylor  for review. Will rebase the patch and commit.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-10 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/168

PHOENIX-1734 Local index improvements(Rajeshbabu)

This is the patch for the new implementation of local indexes, where we 
store local index data in separate column families in the same table rather 
than in a different table. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit 2dbb361551b48b96d8335c12b397a2258c266fd3
Author: Rajeshbabu Chintaguntla 
Date:   2016-05-10T14:00:41Z

PHOENIX-1734 Local index improvements(Rajeshbabu)




---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463302
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java ---
@@ -187,13 +186,13 @@ public void initTable() throws Exception {
 "CREATE LOCAL INDEX \"idx_supplier\" ON " + 
JOIN_SUPPLIER_TABLE_FULL_NAME + " (name)"
 }, {
 "SORT-MERGE-JOIN (LEFT) TABLES\n" +
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + 
MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " 
[-32768]\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " 
+JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [1]\n" +
--- End diff --

@JamesRTaylor  Yes, agreed. Is "OVER LOCAL INDEX OF DATA_TABLE" fine, 
or can we include the index table name as well?


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463350
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 ---
@@ -202,7 +207,10 @@ protected RegionScanner doPostScannerOpen(final 
ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463482
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java ---
@@ -861,7 +871,12 @@ public Put buildUpdateMutation(KeyValueBuilder 
kvBuilder, ValueGetter valueGette
 put.setDurability(!indexWALDisabled ? 
Durability.USE_DEFAULT : Durability.SKIP_WAL);
 }
 //this is a little bit of extra work for installations 
that are running <0.94.14, but that should be rare and is a short-term set of 
wrappers - it shouldn't kill GC
-put.add(kvBuilder.buildPut(rowKey, 
ref.getFamilyWritable(), cq, ts, value));
+if(this.isLocalIndex) {
+ColumnReference columnReference = 
this.coveredColumnsMap.get(ref);
+   put.add(kvBuilder.buildPut(rowKey, 
columnReference.getFamilyWritable(), cq, ts, value));
--- End diff --

Yes, James. It's to avoid regenerating the column family name every time.
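The optimization described here, deriving the local-index column family once and looking it up per mutation instead of rebuilding the name each time, can be sketched generically. The `L#` prefix and all names below are illustrative, not Phoenix's internal API:

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of the idea in the comment above: precompute the mapping
// from each data-table column family to its local-index column family once,
// then reuse it for every Put, rather than re-deriving the name per mutation.
// The prefix and method names are illustrative, not Phoenix internals.
class FamilyMapSketch {
    static final String LOCAL_INDEX_PREFIX = "L#";

    static Map<String, String> buildCoveredFamilyMap(Iterable<String> dataFamilies) {
        Map<String, String> map = new HashMap<>();
        for (String fam : dataFamilies) {
            // Derived once here; callers do a cheap map lookup afterwards.
            map.put(fam, LOCAL_INDEX_PREFIX + fam);
        }
        return map;
    }
}
```

With such a map in hand, the mutation-building loop only pays for a hash lookup per covered column instead of a string concatenation per cell.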


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463462
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/execute/DelegateHTable.java ---
@@ -297,4 +297,28 @@ public boolean checkAndDelete(byte[] row, byte[] 
family, byte[] qualifier,
return delegate.checkAndDelete(row, family, qualifier, 
compareOp, value, delete);
}
 
+@Override
--- End diff --

These methods are required for the 1.2.2-SNAPSHOT version, so they are not 
currently needed with older versions. Will remove them.


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463584
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -1016,7 +1016,7 @@ private MutationState buildIndexAtTimeStamp(PTable 
index, NamedTableNode dataTab
 // connection so that our new index table is visible.
 Properties props = new Properties(connection.getClientInfo());
 props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(connection.getSCN()+1));
-PhoenixConnection conn = 
DriverManager.getConnection(connection.getURL(), 
props).unwrap(PhoenixConnection.class);
+PhoenixConnection conn = new PhoenixConnection(connection, 
connection.getQueryServices(), props);
--- End diff --

DriverManager.getConnection has more overhead than initializing the 
PhoenixConnection directly; that's why it was changed. 


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463618
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/ParallelWriterIndexCommitter.java
 ---
@@ -116,7 +117,10 @@ public void write(Multimap toWrite) throws S
 // doing a complete copy over of all the index update for each 
table.
 final List mutations = 
kvBuilder.cloneIfNecessary((List)entry.getValue());
 final HTableInterfaceReference tableReference = entry.getKey();
-final RegionCoprocessorEnvironment env = this.env;
+if (env !=null && tableReference.getTableName().equals(
+env.getRegion().getTableDesc().getNameAsString())) {
+continue;
+}
--- End diff --

Yes, we should do that. The current patch was made without HBASE-15600, so 
once that's ready I will add the condition check.


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r63463623
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixTransactionalIndexer.java
 ---
@@ -160,6 +163,9 @@ public void 
preBatchMutate(ObserverContext c,
 
 // get the index updates for all elements in this batch
 indexUpdates = getIndexUpdates(c.getEnvironment(), 
indexMetaData, getMutationIterator(miniBatchOp), txRollbackAttribute);
+
+IndexUtil.addLocalUpdatesToCpOperations(c, miniBatchOp, 
indexUpdates,
+m.getDurability() != Durability.SKIP_WAL);
--- End diff --

Yes


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-16 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/168#issuecomment-219617896
  
James, as we discussed, I have made a patch that works with older versions of 
HBase first and handled the review comments here. Will create a new pull 
request with that patch.

Thanks for reviews.


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-20 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r64123637
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -2455,6 +2409,19 @@ public Void call() throws Exception {
 }
 
 if (currentServerSideTableTimeStamp < 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0) {
+Properties props = 
PropertiesUtil.deepCopy(metaConnection.getClientInfo());
+
props.remove(PhoenixRuntime.CURRENT_SCN_ATTRIB);
+
props.remove(PhoenixRuntime.TENANT_ID_ATTRIB);
+PhoenixConnection conn =
+new 
PhoenixConnection(ConnectionQueryServicesImpl.this,
+
metaConnection.getURL(), props, metaConnection
+
.getMetaDataCache());
+try {
+
UpgradeUtil.upgradeLocalIndexes(conn);
--- End diff --

Agree with you James. Will do it in other patch.


---


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements(Rajesh...

2016-05-20 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/168#discussion_r64123714
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SortMergeJoinIT.java ---
@@ -187,13 +186,13 @@ public void initTable() throws Exception {
 "CREATE LOCAL INDEX \"idx_supplier\" ON " + 
JOIN_SUPPLIER_TABLE_FULL_NAME + " (name)"
 }, {
 "SORT-MERGE-JOIN (LEFT) TABLES\n" +
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + 
MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " 
[-32768]\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " 
+JOIN_SUPPLIER_TABLE_DISPLAY_NAME + " [1]\n" +
--- End diff --

I will work on this in a separate patch, James; otherwise I need to change 
all the test cases again.


---


[GitHub] phoenix issue #193: Improvements to Phoenix Web App

2016-08-26 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/193
  
@AyolaJayamaha  Currently we get the webapp-related files from the target 
directory in org.apache.phoenix.tracingwebapp.http.Main. Because of this we 
cannot use the runnable jar independently of the build. Can you also make 
changes so that both the runnable jar and the webapp files reside in the 
build output and are picked up from there in the code?
{code}
URL location = domain.getCodeSource().getLocation();
String webappDirLocation = location.toString().split("target")[0] 
+"src/main/webapp";
Server server = new Server(port);
WebAppContext root = new WebAppContext();
{code}
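The snippet above breaks when the application runs from a standalone jar, because splitting the code-source URL on "target" assumes a Maven build tree. A minimal sketch of a fallback resolution follows; the method and the jar-adjacent `webapp` directory convention are hypothetical, not Phoenix's actual layout:

```java
// Illustrative sketch: resolve the webapp directory whether we run from a
// Maven build tree (.../target/classes/) or from a packaged jar that ships
// a webapp directory next to it. Names are hypothetical, not Phoenix code.
class WebappDirResolver {
    static String resolveWebappDir(String codeSourceLocation) {
        int idx = codeSourceLocation.indexOf("target");
        if (idx >= 0) {
            // Build-tree layout: use the source webapp directory.
            return codeSourceLocation.substring(0, idx) + "src/main/webapp";
        }
        // Jar layout: expect the webapp shipped alongside the jar.
        int slash = codeSourceLocation.lastIndexOf('/');
        String baseDir = slash >= 0 ? codeSourceLocation.substring(0, slash + 1) : "";
        return baseDir + "webapp";
    }
}
```

The result would then be handed to Jetty's `WebAppContext` in place of the hard-coded `split("target")` expression.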


---


[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-08-29 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
Here are a couple of issues I found: one while starting the trace server and 
one while fetching results in the UI.
Currently the Eclipse Jetty version used is 8.1.7.v20120910, from the main 
pom.xml:
8.1.7.v20120910 

`Exception in thread "main" java.lang.NoClassDefFoundError: javax/servlet/FilterRegistration
	at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:134)
	at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:114)
	at org.eclipse.jetty.servlet.ServletContextHandler.<init>(ServletContextHandler.java:102)
	at org.eclipse.jetty.webapp.WebAppContext.<init>(WebAppContext.java:181)
	at org.apache.phoenix.tracingwebapp.http.Main.run(Main.java:72)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.phoenix.tracingwebapp.http.Main.main(Main.java:54)
Caused by: java.lang.ClassNotFoundException: javax.servlet.FilterRegistration
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 7 more
`

When I changed the Jetty version to 7.6.19.v20160209 it works fine. Aren't 
you facing this?
Once I do that, though, I get the exception below and am not able to read 
anything from the trace table. 

`104933 [qtp1157440841-20] WARN org.eclipse.jetty.servlet.ServletHandler - Error for /trace/
java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
	at org.apache.phoenix.tracingwebapp.http.TraceServlet.getResults(TraceServlet.java:136)
	at org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:112)
	at org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(TraceServlet.java:67)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
	at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:652)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:445)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
	at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:556)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:227)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1044)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:372)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:189)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:978)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
	at org.eclipse.jetty.server.Server.handle(Server.java:369)
	at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:464)
	at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:913)
	at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:975)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:641)
	at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:231)
	at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:667)
	at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.ObjectMapper
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLC

[GitHub] phoenix issue #202: PHOENIX-3193 Tracing UI cleanup

2016-09-08 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/202
  
@AyolaJayamaha  we want these improvements and the cleanup to be in the 
4.8.1 release. Is anything else pending here? Is it ready for commit?


---


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80916280
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 ---
@@ -1499,6 +1502,53 @@ public void createTable(RpcController controller, 
CreateTableRequest request,
 cell.getTimestamp(), 
Type.codeToType(cell.getTypeByte()), bytes);
 cells.add(viewConstantCell);
 }
+Short indexId = null;
+if (request.hasAllocateIndexId() && 
request.getAllocateIndexId()) {
+String tenantIdStr = tenantIdBytes.length == 0 ? null 
: Bytes.toString(tenantIdBytes);
+final Properties props = new Properties();
+UpgradeUtil.doNotUpgradeOnFirstConnection(props);
+try (PhoenixConnection connection = 
DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), 
props).unwrap(PhoenixConnection.class)){
+PName physicalName = parentTable.getPhysicalName();
+int nSequenceSaltBuckets = 
connection.getQueryServices().getSequenceSaltBuckets();
+SequenceKey key = 
MetaDataUtil.getViewIndexSequenceKey(tenantIdStr, physicalName,
+nSequenceSaltBuckets, 
parentTable.isNamespaceMapped() );
+// TODO Review Earlier sequence was created at 
(SCN-1/LATEST_TIMESTAMP) and incremented at the client 
max(SCN,dataTable.getTimestamp), but it seems we should
+// use always LATEST_TIMESTAMP to avoid seeing 
wrong sequence values by different connection having SCN
+// or not. 
+long sequenceTimestamp = HConstants.LATEST_TIMESTAMP;
+try {
+
connection.getQueryServices().createSequence(key.getTenantId(), 
key.getSchemaName(), key.getSequenceName(),
--- End diff --

@ankitsinghal  This patch introduces inter-table RPC calls, but it seems 
they're needed. What do you think of moving the sequence creation to the 
metadata client and performing only the sequence increment here? That way we 
can at least reduce the RPC calls here.
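The trade-off proposed here, doing the rare create on the client path so the server-side hot path issues only an increment, can be sketched generically. Everything below (class, method names, the in-memory map standing in for RPCs) is illustrative, not Phoenix's sequence API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Generic sketch of the proposal above: the infrequent create happens once on
// the client side, so the frequent server-side path makes a single increment
// call instead of a create-then-increment pair. The map stands in for the
// remote sequence store; all names are illustrative.
class SequenceSketch {
    private final Map<String, AtomicLong> sequences = new ConcurrentHashMap<>();

    // Client-side, idempotent: called once when the view index is defined.
    void createIfAbsent(String key, long startWith) {
        sequences.computeIfAbsent(key, k -> new AtomicLong(startWith));
    }

    // Server-side hot path: exactly one call, no create round-trip.
    long next(String key) {
        AtomicLong seq = sequences.get(key);
        if (seq == null) {
            throw new IllegalStateException("sequence not created: " + key);
        }
        return seq.incrementAndGet();
    }
}
```

The design choice is the usual one for chatty protocols: move setup cost to the infrequent path so the per-request path stays at one round trip.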


---


[GitHub] phoenix pull request #211: PHOENIX-3254 IndexId Sequence is incremented even...

2016-09-28 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/211#discussion_r80925250
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 ---
@@ -1445,6 +1445,9 @@ public MetaDataResponse call(MetaDataService 
instance) throws IOException {
 builder.addTableMetadataMutations(mp.toByteString());
 }
 
builder.setClientVersion(VersionUtil.encodeVersion(PHOENIX_MAJOR_VERSION, 
PHOENIX_MINOR_VERSION, PHOENIX_PATCH_NUMBER));
+if (allocateIndexId) {
--- End diff --

nit: just format code here.


---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83792475
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java
 ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.mapreduce.index.IndexTool;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildIT}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildIT extends BaseOwnClusterIT {
+
+private final boolean localIndex;
+protected boolean isNamespaceEnabled = false;
+protected final String tableDDLOptions;
+
+public IndexToolForPartialBuildIT(boolean localIndex) {
+
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+optionBuilder.append(" SPLIT ON(1,2)");
+this.tableDDLOptions = optionBuilder.toString();
+}
+
+@BeforeClass
+public static void doSetup() throws Exception {
+Map serverProps = 
Maps.newHashMapWithExpectedSize(7);
+serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put("hbase.coprocessor.region.classes", 
FailingRegionObserver.class.getName());
+serverProps.put(" 
yarn.scheduler.capacity.maximum-am-resource-percent", "1.0");
+serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+serverProps.put("hbase.client.pause", "5000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, 
"1000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"2000");
+Map clientProps = 
Maps.newHashMapWithExpectedSize(1);
+setUpTestDriver(new 
ReadOnlyProps(serverProps.entrySet().iterator()), new 
ReadOnlyProps(clientProps.entrySet().iterator()));
+}
+
+@Parameters(name="localIndex = {0}")
+  

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83792778
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java
 ---
@@ -0,0 +1,264 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.UUID;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HConstants;
+import 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.mapreduce.index.IndexTool;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.StringUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildIT}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildIT extends BaseOwnClusterIT {
+
+private final boolean localIndex;
+protected boolean isNamespaceEnabled = false;
+protected final String tableDDLOptions;
+
+public IndexToolForPartialBuildIT(boolean localIndex) {
+
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+optionBuilder.append(" SPLIT ON(1,2)");
+this.tableDDLOptions = optionBuilder.toString();
+}
+
+@BeforeClass
+public static void doSetup() throws Exception {
+Map serverProps = 
Maps.newHashMapWithExpectedSize(7);
+serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put("hbase.coprocessor.region.classes", 
FailingRegionObserver.class.getName());
+serverProps.put(" 
yarn.scheduler.capacity.maximum-am-resource-percent", "1.0");
+serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+serverProps.put("hbase.client.pause", "5000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, 
"1000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"2000");
+Map clientProps = 
Maps.newHashMapWithExpectedSize(1);
+setUpTestDriver(new 
ReadOnlyProps(serverProps.entrySet().iterator()), new 
ReadOnlyProps(clientProps.entrySet().iterator()));
+}
+
+@Parameters(name="localIndex = {0}")
+  

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83793514
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildWithNamespaceEnabled.java
 ---
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Map;
+
+import org.apache.hadoop.hbase.HConstants;
+import 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.FailingRegionObserver;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Tests for the {@link IndexToolForPartialBuildWithNamespaceEnabled}
+ */
+@RunWith(Parameterized.class)
+public class IndexToolForPartialBuildWithNamespaceEnabled extends 
IndexToolForPartialBuildIT {
+
+
+public IndexToolForPartialBuildWithNamespaceEnabled(boolean 
localIndex, boolean isNamespaceEnabled) {
+super(localIndex);
+this.isNamespaceEnabled=isNamespaceEnabled;
+}
+
+@BeforeClass
+public static void doSetup() throws Exception {
+Map serverProps = 
Maps.newHashMapWithExpectedSize(7);
+serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+serverProps.put("hbase.coprocessor.region.classes", 
FailingRegionObserver.class.getName());
+serverProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
+serverProps.put("hbase.client.pause", "5000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_BATCH_SIZE_ATTRIB, 
"2000");
+
serverProps.put(QueryServices.INDEX_FAILURE_HANDLING_REBUILD_INTERVAL_ATTRIB, 
"1000");
+serverProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
"true");
+Map clientProps = 
Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
"true");
+setUpTestDriver(new 
ReadOnlyProps(serverProps.entrySet().iterator()), new 
ReadOnlyProps(clientProps.entrySet().iterator()));
+}
+
+@Parameters(name="localIndex = {0} , isNamespaceEnabled = {1}")
+public static Collection data() {
+return Arrays.asList(new Boolean[][] { 
+ { false, true},{ true, false }
--- End diff --

We can make IndexToolForPartialBuildIT itself run with namespaces enabled and 
disabled; then we don't need an extra IT class.
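
A sketch of the parameter matrix that would let the existing IT cover both
dimensions (hypothetical class and method names; the real class would use
JUnit 4's Parameterized runner as in the quoted diff):

```java
import java.util.ArrayList;
import java.util.List;

public class ParamsSketch {
    // Full cross product of the two boolean parameters, so
    // IndexToolForPartialBuildIT itself could run with namespaces
    // enabled and disabled instead of needing a separate IT class.
    public static List<Boolean[]> data() {
        List<Boolean[]> params = new ArrayList<>();
        for (boolean localIndex : new boolean[] { false, true }) {
            for (boolean isNamespaceEnabled : new boolean[] { false, true }) {
                params.add(new Boolean[] { localIndex, isNamespaceEnabled });
            }
        }
        return params; // 4 combinations
    }

    public static void main(String[] args) {
        System.out.println(data().size());
    }
}
```

With a `@Parameters(name="localIndex = {0}, isNamespaceEnabled = {1}")` method
returning this list, all four combinations run in one class.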


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83793592
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java ---
@@ -500,14 +500,12 @@ public void testOrderByWithExpression() throws 
Exception {
 stmt.execute();
 conn.commit();
 
-String query = "SELECT col1+col2, col4, a_string FROM " + 
tableName + " ORDER BY 1, 2";
--- End diff --

Why is this change needed?




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795010
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java 
---
@@ -216,6 +219,15 @@ public void testIndexCreateDrop() throws Exception {
 assertFalse(rs.next());
 
 assertActiveIndex(conn, INDEX_DATA_SCHEMA, indexDataTable);
+
+ddl = "ALTER INDEX " + indexName + " ON " + INDEX_DATA_SCHEMA 
+ QueryConstants.NAME_SEPARATOR + indexDataTable + " REBUILD ASYNC";
+conn.createStatement().execute(ddl);
+// Verify the metadata for index is correct.
+rs = conn.getMetaData().getTables(null, 
StringUtil.escapeLike(INDEX_DATA_SCHEMA), indexName , new String[] 
{PTableType.INDEX.toString()});
+assertTrue(rs.next());
+assertEquals(indexName , rs.getString(3));
+assertEquals(PIndexState.BUILDING.toString(), 
rs.getString("INDEX_STATE"));
--- End diff --

This makes an active index rebuild. Can we add an assertion on the timestamp 
from which we rebuild?
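
The assertion could take the shape of bracketing the `REBUILD ASYNC` statement
with client timestamps (a sketch; `withinWindow` is a hypothetical helper, and
in the real test the recorded value would be read from SYSTEM.CATALOG):

```java
public class RebuildWindowSketch {
    // True when the recorded async rebuild timestamp falls between the
    // client timestamps taken just before and just after the
    // "ALTER INDEX ... REBUILD ASYNC" statement was executed.
    public static boolean withinWindow(long asyncRebuildTs, long before, long after) {
        return before <= asyncRebuildTs && asyncRebuildTs <= after;
    }

    public static void main(String[] args) {
        long before = System.currentTimeMillis();
        long recorded = System.currentTimeMillis(); // placeholder for the catalog value
        long after = System.currentTimeMillis();
        System.out.println(withinWindow(recorded, before, after));
    }
}
```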




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795282
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java 
---
@@ -568,4 +580,52 @@ public void testAsyncCreatedDate() throws Exception {
 assertTrue(d2.after(d1));
 assertFalse(rs.next());
 }
+
+@Test
+public void testAsyncRebuildTimestamp() throws Exception {
+long l0 = System.currentTimeMillis();
--- End diff --

Can you use meaningful names for the variables l0 and l1?




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83795678
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java 
---
@@ -568,4 +580,52 @@ public void testAsyncCreatedDate() throws Exception {
 assertTrue(d2.after(d1));
 assertFalse(rs.next());
 }
+
+@Test
+public void testAsyncRebuildTimestamp() throws Exception {
+long l0 = System.currentTimeMillis();
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String testTable = generateUniqueName();
+
+
+String ddl = "create table " + testTable  + " (k varchar primary 
key, v1 varchar, v2 varchar, v3 varchar)";
+PreparedStatement stmt = conn.prepareStatement(ddl);
+stmt.execute();
+String indexName = "R_ASYNCIND_" + generateUniqueName();
+
+ddl = "CREATE INDEX " + indexName + "1 ON " + testTable  + " (v1) 
";
+stmt = conn.prepareStatement(ddl);
--- End diff --

You can use createStatement rather than prepareStatement here.




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83824633
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -85,7 +102,7 @@
 private static final Option DATA_TABLE_OPTION = new Option("dt", 
"data-table", true,
 "Data table name (mandatory)");
 private static final Option INDEX_TABLE_OPTION = new Option("it", 
"index-table", true,
-"Index table name(mandatory)");
+"Index table name(not required in case of partial 
rebuilding)");
--- End diff --

It would be better to have an explicit argument for partial rebuilding than to 
infer a rebuild from the index table option being omitted.
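
The suggestion can be sketched as deciding the run mode from an explicit flag
rather than from a missing option (hypothetical names, not the actual IndexTool
code):

```java
public class IndexToolModeSketch {
    enum Mode { ASYNC_INDEX_BUILD, PARTIAL_REBUILD }

    // With an explicit partialRebuild flag the intent is unambiguous: a
    // missing index table without the flag becomes a usage error instead
    // of a silent fall-back to partial rebuilding.
    public static Mode decideMode(String indexTable, boolean partialRebuild) {
        if (partialRebuild) {
            if (indexTable != null) {
                throw new IllegalArgumentException(
                        "index table must not be given with partial rebuild");
            }
            return Mode.PARTIAL_REBUILD;
        }
        if (indexTable == null) {
            throw new IllegalArgumentException(
                    "index table is required unless partial rebuild is requested");
        }
        return Mode.ASYNC_INDEX_BUILD;
    }

    public static void main(String[] args) {
        System.out.println(decideMode("MY_IDX", false));
        System.out.println(decideMode(null, true));
    }
}
```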




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83827393
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int 
exitCode) {
 formatter.printHelp("help", options);
 System.exit(exitCode);
 }
+
+class JobFactory {
+Connection connection;
+Configuration configuration;
+private Path outputPath;
 
-@Override
-public int run(String[] args) throws Exception {
-Connection connection = null;
-try {
-CommandLine cmdLine = null;
-try {
-cmdLine = parseOptions(args);
-} catch (IllegalStateException e) {
-printHelpAndExit(e.getMessage(), getOptions());
+public JobFactory(Connection connection, Configuration 
configuration, Path outputPath) {
+this.connection = connection;
+this.configuration = configuration;
+this.outputPath = outputPath;
+
+}
+
+public Job getJob(String schemaName, String indexTable, String 
dataTable, boolean useDirectApi) throws Exception {
+if (indexTable == null) {
+return configureJobForPartialBuild(schemaName, dataTable);
+} else {
+return configureJobForAysncIndex(schemaName, indexTable, 
dataTable, useDirectApi);
 }
-final Configuration configuration = 
HBaseConfiguration.addHbaseResources(getConf());
-final String schemaName = 
cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-final String dataTable = 
cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-final String indexTable = 
cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+}
+
+private Job configureJobForPartialBuild(String schemaName, String 
dataTable) throws Exception {
 final String qDataTable = 
SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-final String qIndexTable = 
SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+final PTable pdataTable = PhoenixRuntime.getTable(connection, 
qDataTable);
 connection = ConnectionUtil.getInputConnection(configuration);
-if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-throw new IllegalArgumentException(String.format(
-" %s is not an index table for %s ", qIndexTable, 
qDataTable));
+long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+PTable indexWithMinDisableTimestamp = null;
+
+//Get Indexes in building state, minDisabledTimestamp 
+List disableIndexes = new ArrayList();
+List disabledPIndexes = new ArrayList();
+for (PTable index : pdataTable.getIndexes()) {
+if (index.getIndexState().equals(PIndexState.BUILDING)) {
+disableIndexes.add(index.getTableName().getString());
+disabledPIndexes.add(index);
+if (minDisableTimestamp > 
index.getIndexDisableTimestamp()) {
+minDisableTimestamp = 
index.getIndexDisableTimestamp();
+indexWithMinDisableTimestamp = index;
+}
+}
+}
+
+if (indexWithMinDisableTimestamp == null) {
+throw new Exception("There is no index for a datatable to 
be rebuild:" + qDataTable);
 }
+if (minDisableTimestamp == 0) {
+throw new Exception("It seems Index " + 
indexWithMinDisableTimestamp
++ " has disable timestamp as 0 , please run 
IndexTool with IndexName to build it first");
+// TODO probably we can initiate the job by ourself or can 
skip them while making the list for partial build with a warning
+}
+
+long maxTimestamp = getMaxRebuildAsyncDate(schemaName, 
disableIndexes);
+
+//serialize index maintaienr in job conf with Base64 TODO: 
Need to find better way to serialize them in conf.
+List maintainers = 
Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+for (PTable index : disabledPIndexes) {
+maintainers.add(index.getIndexMaintainer(pdataTable, 
connection.unwrap(PhoenixConnection.class)));
+}
+ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83827883
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int 
exitCode) {
 formatter.printHelp("help", options);
 System.exit(exitCode);
 }
+
+class JobFactory {
+Connection connection;
+Configuration configuration;
+private Path outputPath;
 
-@Override
-public int run(String[] args) throws Exception {
-Connection connection = null;
-try {
-CommandLine cmdLine = null;
-try {
-cmdLine = parseOptions(args);
-} catch (IllegalStateException e) {
-printHelpAndExit(e.getMessage(), getOptions());
+public JobFactory(Connection connection, Configuration 
configuration, Path outputPath) {
+this.connection = connection;
+this.configuration = configuration;
+this.outputPath = outputPath;
+
+}
+
+public Job getJob(String schemaName, String indexTable, String 
dataTable, boolean useDirectApi) throws Exception {
+if (indexTable == null) {
+return configureJobForPartialBuild(schemaName, dataTable);
+} else {
+return configureJobForAysncIndex(schemaName, indexTable, 
dataTable, useDirectApi);
 }
-final Configuration configuration = 
HBaseConfiguration.addHbaseResources(getConf());
-final String schemaName = 
cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-final String dataTable = 
cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-final String indexTable = 
cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+}
+
+private Job configureJobForPartialBuild(String schemaName, String 
dataTable) throws Exception {
 final String qDataTable = 
SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-final String qIndexTable = 
SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+final PTable pdataTable = PhoenixRuntime.getTable(connection, 
qDataTable);
 connection = ConnectionUtil.getInputConnection(configuration);
-if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-throw new IllegalArgumentException(String.format(
-" %s is not an index table for %s ", qIndexTable, 
qDataTable));
+long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+PTable indexWithMinDisableTimestamp = null;
+
+//Get Indexes in building state, minDisabledTimestamp 
+List disableIndexes = new ArrayList();
+List disabledPIndexes = new ArrayList();
+for (PTable index : pdataTable.getIndexes()) {
+if (index.getIndexState().equals(PIndexState.BUILDING)) {
+disableIndexes.add(index.getTableName().getString());
+disabledPIndexes.add(index);
+if (minDisableTimestamp > 
index.getIndexDisableTimestamp()) {
+minDisableTimestamp = 
index.getIndexDisableTimestamp();
+indexWithMinDisableTimestamp = index;
+}
+}
+}
+
+if (indexWithMinDisableTimestamp == null) {
+throw new Exception("There is no index for a datatable to 
be rebuild:" + qDataTable);
 }
+if (minDisableTimestamp == 0) {
+throw new Exception("It seems Index " + 
indexWithMinDisableTimestamp
++ " has disable timestamp as 0 , please run 
IndexTool with IndexName to build it first");
+// TODO probably we can initiate the job by ourself or can 
skip them while making the list for partial build with a warning
+}
+
+long maxTimestamp = getMaxRebuildAsyncDate(schemaName, 
disableIndexes);
+
+//serialize index maintaienr in job conf with Base64 TODO: 
Need to find better way to serialize them in conf.
+List maintainers = 
Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+for (PTable index : disabledPIndexes) {
+maintainers.add(index.getIndexMaintainer(pdataTable, 
connection.unwrap(PhoenixConnection.class)));
+}
+ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-10-18 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r83832270
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -305,16 +305,27 @@
 TENANT_ID + "," +
 TABLE_SCHEM + "," +
 TABLE_NAME + "," +
-INDEX_STATE +
+INDEX_STATE + "," +
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
+") VALUES (?, ?, ?, ?, ?)";
+
+private static final String UPDATE_INDEX_REBUILD_ASYNC_STATE =
+"UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_CATALOG_TABLE + "\"( " +
+TENANT_ID + "," +
+TABLE_SCHEM + "," +
+TABLE_NAME + "," +
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
 ") VALUES (?, ?, ?, ?)";
+
 private static final String UPDATE_INDEX_STATE_TO_ACTIVE =
 "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_CATALOG_TABLE + "\"( " +
 TENANT_ID + "," +
 TABLE_SCHEM + "," +
 TABLE_NAME + "," +
 INDEX_STATE + "," +
-INDEX_DISABLE_TIMESTAMP +
-") VALUES (?, ?, ?, ?, ?)";
+INDEX_DISABLE_TIMESTAMP +","+
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
--- End diff --

Here ASYNC_REBUILD_TIMESTAMP is a dynamic column. Are there any problems with 
using dynamic columns for system tables? Why can't you make it a normal column?




[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90188400
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java ---
@@ -305,16 +305,27 @@
 TENANT_ID + "," +
 TABLE_SCHEM + "," +
 TABLE_NAME + "," +
-INDEX_STATE +
+INDEX_STATE + "," +
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
+") VALUES (?, ?, ?, ?, ?)";
+
+private static final String UPDATE_INDEX_REBUILD_ASYNC_STATE =
+"UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_CATALOG_TABLE + "\"( " +
+TENANT_ID + "," +
+TABLE_SCHEM + "," +
+TABLE_NAME + "," +
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
 ") VALUES (?, ?, ?, ?)";
+
 private static final String UPDATE_INDEX_STATE_TO_ACTIVE =
 "UPSERT INTO " + SYSTEM_CATALOG_SCHEMA + ".\"" + 
SYSTEM_CATALOG_TABLE + "\"( " +
 TENANT_ID + "," +
 TABLE_SCHEM + "," +
 TABLE_NAME + "," +
 INDEX_STATE + "," +
-INDEX_DISABLE_TIMESTAMP +
-") VALUES (?, ?, ?, ?, ?)";
+INDEX_DISABLE_TIMESTAMP +","+
+ASYNC_REBUILD_TIMESTAMP + " " + 
PLong.INSTANCE.getSqlTypeName() +
--- End diff --

OK, it's fine. Thanks, Ankit.





[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90192647
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java ---
@@ -167,50 +180,152 @@ private void printHelpAndExit(Options options, int 
exitCode) {
 formatter.printHelp("help", options);
 System.exit(exitCode);
 }
+
+class JobFactory {
+Connection connection;
+Configuration configuration;
+private Path outputPath;
 
-@Override
-public int run(String[] args) throws Exception {
-Connection connection = null;
-try {
-CommandLine cmdLine = null;
-try {
-cmdLine = parseOptions(args);
-} catch (IllegalStateException e) {
-printHelpAndExit(e.getMessage(), getOptions());
+public JobFactory(Connection connection, Configuration 
configuration, Path outputPath) {
+this.connection = connection;
+this.configuration = configuration;
+this.outputPath = outputPath;
+
+}
+
+public Job getJob(String schemaName, String indexTable, String 
dataTable, boolean useDirectApi) throws Exception {
+if (indexTable == null) {
+return configureJobForPartialBuild(schemaName, dataTable);
+} else {
+return configureJobForAysncIndex(schemaName, indexTable, 
dataTable, useDirectApi);
 }
-final Configuration configuration = 
HBaseConfiguration.addHbaseResources(getConf());
-final String schemaName = 
cmdLine.getOptionValue(SCHEMA_NAME_OPTION.getOpt());
-final String dataTable = 
cmdLine.getOptionValue(DATA_TABLE_OPTION.getOpt());
-final String indexTable = 
cmdLine.getOptionValue(INDEX_TABLE_OPTION.getOpt());
+}
+
+private Job configureJobForPartialBuild(String schemaName, String 
dataTable) throws Exception {
 final String qDataTable = 
SchemaUtil.getQualifiedTableName(schemaName, dataTable);
-final String qIndexTable = 
SchemaUtil.getQualifiedTableName(schemaName, indexTable);
-
+final PTable pdataTable = PhoenixRuntime.getTable(connection, 
qDataTable);
 connection = ConnectionUtil.getInputConnection(configuration);
-if (!isValidIndexTable(connection, qDataTable, indexTable)) {
-throw new IllegalArgumentException(String.format(
-" %s is not an index table for %s ", qIndexTable, 
qDataTable));
+long minDisableTimestamp = HConstants.LATEST_TIMESTAMP;
+PTable indexWithMinDisableTimestamp = null;
+
+//Get Indexes in building state, minDisabledTimestamp 
+List disableIndexes = new ArrayList();
+List disabledPIndexes = new ArrayList();
+for (PTable index : pdataTable.getIndexes()) {
+if (index.getIndexState().equals(PIndexState.BUILDING)) {
+disableIndexes.add(index.getTableName().getString());
+disabledPIndexes.add(index);
+if (minDisableTimestamp > 
index.getIndexDisableTimestamp()) {
+minDisableTimestamp = 
index.getIndexDisableTimestamp();
+indexWithMinDisableTimestamp = index;
+}
+}
+}
+
+if (indexWithMinDisableTimestamp == null) {
+throw new Exception("There is no index for a datatable to 
be rebuild:" + qDataTable);
 }
+if (minDisableTimestamp == 0) {
+throw new Exception("It seems Index " + 
indexWithMinDisableTimestamp
++ " has disable timestamp as 0 , please run 
IndexTool with IndexName to build it first");
+// TODO probably we can initiate the job by ourself or can 
skip them while making the list for partial build with a warning
+}
+
+long maxTimestamp = getMaxRebuildAsyncDate(schemaName, 
disableIndexes);
+
+//serialize index maintaienr in job conf with Base64 TODO: 
Need to find better way to serialize them in conf.
+List maintainers = 
Lists.newArrayListWithExpectedSize(disabledPIndexes.size());
+for (PTable index : disabledPIndexes) {
+maintainers.add(index.getIndexMaintainer(pdataTable, 
connection.unwrap(PhoenixConnection.class)));
+}
+ImmutableBytesWr

[GitHub] phoenix pull request #210: PHOENIX-2890 Extend IndexTool to allow incrementa...

2016-11-30 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/210#discussion_r90192895
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolForPartialBuildIT.java
 ---
@@ -217,39 +223,47 @@ public void testSecondaryIndex() throws Exception {
 
 assertFalse(rs.next());
 
-conn.createStatement().execute(String.format("DROP INDEX  %s 
ON %s", indxTable, fullTableName));
+   // conn.createStatement().execute(String.format("DROP INDEX  %s 
ON %s", indxTable, fullTableName));
--- End diff --

We can remove this commented-out code.




[GitHub] phoenix issue #238: PHOENIX-3690 Support declaring default values in Phoenix...

2017-04-02 Thread chrajeshbabu
Github user chrajeshbabu commented on the issue:

https://github.com/apache/phoenix/pull/238
  
+1 Kevin. Thanks for the update. Will commit it.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-25 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/135

PHOENIX-1734 Local index improvements

The patch supports storing local index data in the same data table.
1) Removed the code that used HBase internals in the balancer, split, and merge.
2) CREATE INDEX now creates column families prefixed with L# for the data 
column families.
3) Changed the read and write paths to use the L#-prefixed column families for 
local indexes.
4) Updated the tests accordingly.
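
The column family convention in point 2 can be sketched as a simple mapping
(a sketch assuming "L#" is the local index column family prefix described
above; the real constant lives in Phoenix, not here):

```java
public class LocalIndexFamilySketch {
    static final String LOCAL_INDEX_CF_PREFIX = "L#";

    // Maps a data table column family to the local index column family
    // stored alongside it in the same HBase table.
    public static String localIndexFamily(String dataFamily) {
        return LOCAL_INDEX_CF_PREFIX + dataFamily;
    }

    // Lets the read/write paths tell index families from data families.
    public static boolean isLocalIndexFamily(String family) {
        return family.startsWith(LOCAL_INDEX_CF_PREFIX);
    }

    public static void main(String[] args) {
        System.out.println(localIndexFamily("0"));
    }
}
```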

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #135


commit 4e663a2479adbf3e41826f40c1b2ed6bb69d7634
Author: Rajeshbabu Chintaguntla 
Date:   2015-11-25T16:33:33Z

PHOENIX-1734 Local index improvements(Rajeshbabu)






[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096275
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java 
---
@@ -186,7 +186,9 @@ private void testDeleteRange(boolean autoCommit, 
boolean createIndex, boolean lo
 PreparedStatement stmt;
 conn.setAutoCommit(autoCommit);
 deleteStmt = "DELETE FROM IntIntKeyTest WHERE i >= ? and i < ?";
-assertIndexUsed(conn, deleteStmt, Arrays.asList(5,10), 
indexName, false);
+if(!local) {
--- End diff --

It will be used. For local indexes the explain plan always has "SCAN OVER 
DATATABLENAME", so the assertion fails; that's why it isn't checked in the 
local index case. I will modify it to make a proper assertion for local 
indexes, e.g. that the plan contains the index id as well.
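
That stricter assertion could check the plan for the index id marker too (a
sketch; the -32768 value is the first local index id as it appears in the
explain plans quoted elsewhere in this thread):

```java
public class LocalIndexPlanSketch {
    // For a local index the explain plan scans the data table itself, so
    // checking only "SCAN OVER <datatable>" cannot distinguish an index
    // scan from a plain table scan; also require the index id marker.
    public static boolean usesLocalIndex(String plan, String dataTable) {
        return plan.contains("SCAN OVER " + dataTable)
                && plan.contains("[-32768");
    }

    public static void main(String[] args) {
        String plan = "CLIENT PARALLEL 1-WAY RANGE SCAN OVER T [-32768]";
        System.out.println(usesLocalIndex(plan, "T"));
    }
}
```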




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096395
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinIT.java ---
@@ -873,7 +873,7 @@ public void initTable() throws Exception {
 "SERVER AGGREGATE INTO DISTINCT ROWS BY 
[\"I.0:NAME\"]\n" +
 "CLIENT MERGE SORT\n" +
 "PARALLEL LEFT-JOIN TABLE 0\n" +
-"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + 
MetaDataUtil.LOCAL_INDEX_TABLE_PREFIX + "" + JOIN_ITEM_TABLE_DISPLAY_NAME +" 
[-32768]\n" +
+"CLIENT PARALLEL 1-WAY RANGE SCAN OVER " 
+JOIN_ITEM_TABLE_DISPLAY_NAME +" [-32768]\n" +
--- End diff --

Yes, it's using local indexes, but it's confusing because we have the same 
table name; the only difference here is the index id. What do you think of 
changing it to the logical name in ExplainTable?
buf.append("OVER " + 
tableRef.getTable().getPhysicalName().getString());




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096413
  
--- Diff: 
phoenix-core/src/main/java/org/apache/hadoop/hbase/regionserver/IndexHalfStoreFileReaderGenerator.java
 ---
@@ -80,6 +81,9 @@ public Reader 
preStoreFileReaderOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096442
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -134,7 +143,36 @@ private static void setValues(byte[][] values, int[] 
pkSlotIndex, int[] columnIn
 }
 }
 ImmutableBytesPtr ptr = new ImmutableBytesPtr();
-table.newKey(ptr, pkValues);
+if(table.getIndexType()==IndexType.LOCAL) {
--- End diff --

We are preparing index updates on the server side only, but I added this so 
that proper local index mutations are prepared if we ever run an upsert into 
the index table directly.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46096498
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 ---
@@ -158,6 +162,10 @@ public RegionScanner 
preScannerOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/135#issuecomment-160454376
  
Thanks for the review, @JamesRTaylor. Please find my answers to the comments.




[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46111214
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 ---
@@ -158,6 +162,10 @@ public RegionScanner 
preScannerOpen(ObserverContext

[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-29 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/135#discussion_r46111265
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/UpsertCompiler.java ---
@@ -134,7 +143,36 @@ private static void setValues(byte[][] values, int[] 
pkSlotIndex, int[] columnIn
 }
 }
 ImmutableBytesPtr ptr = new ImmutableBytesPtr();
-table.newKey(ptr, pkValues);
+if(table.getIndexType()==IndexType.LOCAL) {
--- End diff --

bq. so I don't think we should include this code.
I will remove it in the next patch.

bq. We've talked about having a kind of "scrutiny" process that scans the 
data table and ensures that all the corresponding indexes have the correct rows 
(I filed PHOENIX-2460 for this). 
+1 on this.
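For readers following the exchange above, here is a minimal sketch of the distinction being discussed: a local index row key is prefixed with the hosting region's start key, so it cannot be built from PK values alone the way a global index or data table key can. All class and method names below are illustrative stand-ins, not Phoenix's actual API.

```java
// Hypothetical sketch (not Phoenix's real classes) of why local index
// row-key construction needs a separate code path.
public class UpsertKeySketch {
    enum IndexType { GLOBAL, LOCAL }

    // For a LOCAL index, the region start key is prepended so index rows
    // sort next to the data rows they cover in the same region.
    static String buildRowKey(IndexType type, String regionStartKey, String pkValues) {
        if (type == IndexType.LOCAL) {
            return regionStartKey + pkValues; // region-prefixed key
        }
        return pkValues; // plain PK-derived key
    }

    public static void main(String[] args) {
        System.out.println(buildRowKey(IndexType.LOCAL, "region1|", "a|b"));  // region1|a|b
        System.out.println(buildRowKey(IndexType.GLOBAL, "region1|", "a|b")); // a|b
    }
}
```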




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834683
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
---
@@ -51,21 +51,38 @@ public static void doSetup() throws Exception {
 props.put(QueryServices.MAX_INTRA_REGION_PARALLELIZATION_ATTRIB, 
Integer.toString(1));
 // Forces server cache to be used
 props.put(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, 
Integer.toString(2));
+props.put(QueryServices.DROP_METADATA_ATTRIB, 
Boolean.toString(true));
 // Must update config before starting server
 startServer(getUrl(), new 
ReadOnlyProps(props.entrySet().iterator()));
 }
-
+
 @Test
 public void testIndexWithNullableFixedWithCols() throws Exception {
+testIndexWithNullableFixedWithCols(false);
+}
+
+@Test
+public void testLocalIndexWithNullableFixedWithCols() throws Exception 
{
+testIndexWithNullableFixedWithCols(true);
+}
+
+private void testIndexWithNullableFixedWithCols(boolean localIndex) 
throws Exception {
 Properties props = new Properties(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(false);
 try {
 createTestTable();
 populateTestTable();
-String ddl = "CREATE INDEX " + INDEX_TABLE_NAME + " ON " + 
DATA_TABLE_FULL_NAME
-+ " (char_col1 ASC, int_col1 ASC)"
-+ " INCLUDE (long_col1, long_col2)";
+String ddl = null;
+if(localIndex){
+ddl = "CREATE INDEX " + INDEX_TABLE_NAME + " ON " + 
DATA_TABLE_FULL_NAME
--- End diff --

Somehow missed this. Corrected.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834867
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/CreateIndexCompiler.java 
---
@@ -47,6 +51,21 @@ public MutationPlan compile(final CreateIndexStatement 
create) throws SQLExcepti
 final StatementContext context = new StatementContext(statement, 
resolver, scan);
 ExpressionCompiler expressionCompiler = new 
ExpressionCompiler(context);
 List splitNodes = create.getSplitNodes();
+if (create.getIndexType() == IndexType.LOCAL) {
+if (!splitNodes.isEmpty()) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.CANNOT_SPLIT_LOCAL_INDEX)
+.build().buildException();
+} 
+if (create.getProps() != null && create.getProps().get("") != 
null) {
+List> list = 
create.getProps().get("");
--- End diff --

corrected.
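The diff above rejects user-specified split points for local indexes, since a local index inherits its region boundaries from the data table. A minimal sketch of that validation rule, with illustrative names (Phoenix's real code throws a `SQLException` built from `SQLExceptionCode.CANNOT_SPLIT_LOCAL_INDEX`):

```java
import java.util.List;

// Sketch of the CreateIndexCompiler guard discussed above: a LOCAL index
// is co-located with the data table's regions, so SPLIT ON is invalid.
public class LocalIndexValidation {
    enum IndexType { GLOBAL, LOCAL }

    static void validate(IndexType type, List<String> splitNodes) {
        if (type == IndexType.LOCAL && !splitNodes.isEmpty()) {
            // Phoenix raises CANNOT_SPLIT_LOCAL_INDEX here; this is a stand-in.
            throw new IllegalArgumentException("Cannot split a local index");
        }
    }

    public static void main(String[] args) {
        validate(IndexType.GLOBAL, List.of("a", "b")); // allowed
        try {
            validate(IndexType.LOCAL, List.of("a"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```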




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834896
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java 
---
@@ -282,7 +286,7 @@ public Expression visitLeave(FunctionParseNode node, 
List children)
 children = node.validate(children, context);
 Expression expression = node.create(children, context);
 ImmutableBytesWritable ptr = context.getTempPtr();
-if (node.isStateless()) {
+if (node.isStateless() && expression.isDeterministic()) {
--- End diff --

Yes, James. This change is already there in the master branch.
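The guard in the diff above (`node.isStateless() && expression.isDeterministic()`) controls compile-time constant folding: an expression may be pre-evaluated into a constant only when it needs no row context *and* yields the same value on every call. A function like `RAND()` is stateless yet non-deterministic, so it must stay unfolded. A trivial sketch of the predicate:

```java
// Sketch of the constant-folding guard: both conditions must hold before
// an expression can be replaced with a literal at compile time.
public class FoldingGuard {
    static boolean canFold(boolean stateless, boolean deterministic) {
        return stateless && deterministic;
    }

    public static void main(String[] args) {
        System.out.println(canFold(true, true));  // e.g. 1 + 2 -> foldable
        System.out.println(canFold(true, false)); // e.g. RAND() -> not foldable
    }
}
```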




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834905
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java 
---
@@ -320,7 +322,21 @@ public Expression visitLeave(FunctionParseNode node, 
List children)
  * @throws SQLException if the column expression node does not refer 
to a known/unambiguous column
  */
 protected ColumnRef resolveColumn(ColumnParseNode node) throws 
SQLException {
-ColumnRef ref = 
context.getResolver().resolveColumn(node.getSchemaName(), node.getTableName(), 
node.getName());
+ColumnRef ref = null;
+try {
+ref = 
context.getResolver().resolveColumn(node.getSchemaName(), node.getTableName(), 
node.getName());
+} catch (ColumnNotFoundException e) {
+// If local index table (need to test join case here)
--- End diff --

Added the comment.
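The try/catch in the diff above implements a fallback: resolve the column against the index table first, and on `ColumnNotFoundException`, if the current table is a local index, resolve against the data table instead (the data row lives in the same region, so the column can still be fetched server-side). A hedged sketch with maps standing in for Phoenix's `ColumnResolver` machinery:

```java
import java.util.Map;

// Illustrative stand-in for the ExpressionCompiler.resolveColumn fallback:
// index-table lookup first, then data-table lookup for local indexes only.
public class ColumnResolveSketch {
    static String resolve(Map<String, String> indexCols,
                          Map<String, String> dataCols,
                          boolean isLocalIndex, String name) {
        String ref = indexCols.get(name);
        if (ref != null) {
            return ref; // covered by the index
        }
        if (isLocalIndex) {
            String dataRef = dataCols.get(name);
            if (dataRef != null) {
                return "data:" + dataRef; // fetched from the co-located data row
            }
        }
        throw new RuntimeException("Column not found: " + name);
    }

    public static void main(String[] args) {
        Map<String, String> idx = Map.of("V1", "idx:V1");
        Map<String, String> data = Map.of("V1", "V1", "V2", "V2");
        System.out.println(resolve(idx, data, true, "V1")); // idx:V1
        System.out.println(resolve(idx, data, true, "V2")); // data:V2
    }
}
```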




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834954
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/IndexStatementRewriter.java
 ---
@@ -96,6 +96,12 @@ public ParseNode visit(ColumnParseNode node) throws 
SQLException {
 
 String indexColName = IndexUtil.getIndexColumnName(dataCol);
 // Same alias as before, but use the index column name instead of 
the data column name
+// TODO: add dataColRef as an alternate ColumnParseNode in the 
case that the index
--- End diff --

removed.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14835021
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java ---
@@ -1037,8 +1038,13 @@ public static SelectStatement 
optimize(StatementContext context, SelectStatement
 List orderBy = tableRef.equals(orderByTableRef) ? 
select.getOrderBy() : null;
 SelectStatement stmt = 
getSubqueryForOptimizedPlan(select.getHint(), table.getDynamicColumns(), 
tableRef, join.getColumnRefs(), table.getPreFiltersCombined(), groupBy, 
orderBy, table.isWildCardSelect());
 QueryPlan plan = 
context.getConnection().getQueryServices().getOptimizer().optimize(statement, 
stmt);
+boolean localIndex = 
plan.getContext().getCurrentTable().getTable().getIndexType()==IndexType.LOCAL;
+Scan scan = plan.getContext().getScan();
+boolean useLocalIndex = localIndex && 
plan.getContext().getDataColumns().isEmpty();
 if (!plan.getTableRef().equals(tableRef)) {
-replacement.put(tableRef, plan.getTableRef());
+if (!localIndex || useLocalIndex) {
--- End diff --

comment added.
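The condition in the diff above (`!localIndex || useLocalIndex`) decides whether the optimizer may substitute the index plan for the data table in a join: always for a non-local index, but for a local index only when every referenced column is covered (no data columns need to be fetched back from the data table). A minimal sketch of that decision, with illustrative names:

```java
// Sketch of the JoinCompiler substitution rule: swap in the index plan
// unless it is an uncovered local index.
public class IndexSubstitution {
    static boolean shouldReplace(boolean isLocalIndex, int uncoveredDataColumns, boolean planDiffers) {
        if (!planDiffers) {
            return false; // optimizer kept the data table; nothing to swap
        }
        boolean useLocalIndex = isLocalIndex && uncoveredDataColumns == 0;
        return !isLocalIndex || useLocalIndex;
    }

    public static void main(String[] args) {
        System.out.println(shouldReplace(false, 0, true)); // global index: replace
        System.out.println(shouldReplace(true, 2, true));  // uncovered local index: keep data table
        System.out.println(shouldReplace(true, 0, true));  // covered local index: replace
    }
}
```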




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14835033
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/StatementContext.java ---
@@ -81,6 +93,20 @@ public StatementContext(PhoenixStatement statement, 
ColumnResolver resolver, Sca
 this.currentTable = resolver != null && 
!resolver.getTables().isEmpty() ? resolver.getTables().get(0) : null;
 this.sequences = new SequenceManager(statement);
 this.whereConditionColumns = new ArrayList>();
+this.dataColumns = this.currentTable == null ? 
Collections.emptyMap() : Maps.newLinkedHashMap();
+}
+
+public int getDataColumnPosition(PColumn column) {
--- End diff --

done.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14835161
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/compile/TrackOrderPreservingExpressionCompiler.java
 ---
@@ -69,6 +70,7 @@
 boolean isSharedViewIndex = table.getViewIndexId() != null;
 // TODO: util for this offset, as it's computed in numerous places
 positionOffset = (isSalted ? 1 : 0) + (isMultiTenant ? 1 : 0) + 
(isSharedViewIndex ? 1 : 0);
+this.isOrderPreserving &= table.getIndexType() != IndexType.LOCAL;
--- End diff --

Thanks for pointing this out, James. Yes, a merge sort is fine. I have made the 
changes locally.
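The merge sort being agreed on above works because each region returns its local-index rows already sorted, but the ordering only holds within a region; the client must merge the per-region streams to produce a globally ordered result. A self-contained sketch of that k-way merge (integers stand in for index row keys):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of a client-side k-way merge over per-region sorted streams,
// the technique discussed above for local index ordering.
public class RegionMergeSort {
    static List<Integer> mergeRegions(List<List<Integer>> regions) {
        // Heap entries: {value, regionIndex, offsetWithinRegion}
        PriorityQueue<int[]> heap =
                new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[0]));
        for (int r = 0; r < regions.size(); r++) {
            if (!regions.get(r).isEmpty()) {
                heap.add(new int[]{regions.get(r).get(0), r, 0});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] top = heap.poll();
            out.add(top[0]);
            List<Integer> region = regions.get(top[1]);
            int next = top[2] + 1;
            if (next < region.size()) {
                heap.add(new int[]{region.get(next), top[1], next});
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<List<Integer>> regions =
                List.of(List.of(1, 4, 9), List.of(2, 3, 8), List.of(5, 7));
        System.out.println(mergeRegions(regions)); // globally sorted output
    }
}
```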




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850777
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -366,6 +384,21 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850781
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -387,6 +420,11 @@ private RegionScanner 
scanUnordered(ObserverContext

[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850792
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/execute/BasicQueryPlan.java ---
@@ -141,6 +163,25 @@ public final ResultIterator iterator(final 
List dependencies) thro
 Long scn = connection.getSCN();
 ScanUtil.setTimeRange(scan, scn == null ? context.getCurrentTime() 
: scn);
 ScanUtil.setTenantId(scan, connection.getTenantId() == null ? null 
: connection.getTenantId().getBytes());
+if (context.getCurrentTable().getTable().getIndexType() == 
IndexType.LOCAL) {
--- End diff --

done.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850789
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/ScanRegionObserver.java
 ---
@@ -100,7 +109,7 @@ public static void serializeIntoScan(Scan scan, int 
thresholdBytes, int limit, L
 }
 }
 
-public static OrderedResultIterator deserializeFromScan(Scan scan, 
RegionScanner s) {
+public static OrderedResultIterator deserializeFromScan(Scan scan, 
RegionScanner s, int offset) {
--- End diff --

I will verify this, James.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850794
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/ColumnRef.java ---
@@ -113,7 +117,12 @@ public ColumnExpression newColumnExpression() {
 String displayName = 
SchemaUtil.getColumnDisplayName(defaultFamilyName.equals(dataFamilyName) ? null 
: dataFamilyName, dataColumnName);
return new KeyValueColumnExpression(column, displayName);
 }
-
+
--- End diff --

Already done in master.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850795
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/schema/LocalIndexDataColumnRef.java
 ---
@@ -0,0 +1,49 @@
+package org.apache.phoenix.schema;
+
+import java.sql.SQLException;
+import java.util.Set;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.FromCompiler;
+import org.apache.phoenix.compile.StatementContext;
+import org.apache.phoenix.expression.ColumnExpression;
+import org.apache.phoenix.expression.ProjectedColumnExpression;
+import org.apache.phoenix.parse.ParseNodeFactory;
+import org.apache.phoenix.parse.TableName;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+public class LocalIndexDataColumnRef extends ColumnRef {
+final private int position;
+final private Set columns;
+private static final ParseNodeFactory FACTORY = new ParseNodeFactory();
+
+// TODO: Need a good way to clone this - maybe implement Cloneable 
instead
--- End diff --

done.




[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on the pull request:

https://github.com/apache/phoenix/pull/1#issuecomment-48798867
  
Thanks for the review, @JamesRTaylor. I have resolved the conflicts and addressed 
all the comments locally. I will submit once I verify the OrderedResultIterator 
scenario.



