[ https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16356728#comment-16356728 ]
ASF GitHub Bot commented on PHOENIX-4278:
-----------------------------------------
Github user ohadshacham commented on a diff in the pull request:
https://github.com/apache/phoenix/pull/291#discussion_r166876845
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/PhoenixTxnIndexMutationGenerator.java ---
@@ -0,0 +1,505 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.execute;
+
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+import com.google.common.collect.Sets;
+import com.google.common.primitives.Longs;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.*;
+import org.apache.hadoop.hbase.client.*;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.MultiMutation;
+import org.apache.phoenix.hbase.index.ValueGetter;
+import org.apache.phoenix.hbase.index.covered.IndexMetaData;
+import org.apache.phoenix.hbase.index.covered.IndexUpdate;
+import org.apache.phoenix.hbase.index.covered.TableState;
+import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
+import org.apache.phoenix.hbase.index.covered.update.ColumnTracker;
+import org.apache.phoenix.hbase.index.covered.update.IndexedColumnGroup;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.index.PhoenixIndexCodec;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.transaction.PhoenixTransactionContext;
+import org.apache.phoenix.transaction.PhoenixTransactionContext.PhoenixVisibilityLevel;
+import org.apache.phoenix.util.ScanUtil;
+import org.apache.phoenix.util.SchemaUtil;
+
+import java.io.IOException;
+import java.sql.SQLException;
+import java.util.*;
+
+
+public class PhoenixTxnIndexMutationGenerator {
--- End diff ---
As you saw, I basically imported the code and removed the coprocessor-specific
parts, retrieving the needed information on the client side instead :).
Essentially, I replaced what previously came from the coprocessor environment
and refactored the code.
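
To illustrate the idea of replacing coprocessor-environment lookups with client-side
reads, here is a minimal sketch (not the actual PhoenixTxnIndexMutationGenerator code)
that fetches the prior row state from the data table with a plain HBase Get bounded by
the transaction's read pointer. The table name "MY_DATA_TABLE" and the read pointer
value are illustrative assumptions.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PriorRowStateFetchSketch {

    // Reads the row as it was visible to the transaction, i.e. only cells written
    // at or below the transaction's read pointer.
    public static Result fetchPriorRowState(byte[] dataRowKey, long readPointer) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table dataTable = conn.getTable(TableName.valueOf("MY_DATA_TABLE"))) { // assumed table name
            Get get = new Get(dataRowKey);
            get.setTimeRange(0, readPointer + 1);
            return dataTable.get(get);
        }
    }

    public static void main(String[] args) throws IOException {
        Result prior = fetchPriorRowState(Bytes.toBytes("row1"), 42L /* assumed read pointer */);
        System.out.println("Prior row is empty: " + prior.isEmpty());
    }
}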
> Implement pure client side transactional index maintenance
> ----------------------------------------------------------
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
> Issue Type: Improvement
> Reporter: James Taylor
> Assignee: Ohad Shacham
> Priority: Major
>
> Index maintenance for transactional tables currently follows the same model as
> non-transactional tables: a coprocessor triggered on data table updates looks up
> the previous row value to perform the maintenance. For non-transactional tables
> this is necessary to ensure the rows are locked so that a consistent view may be
> obtained. However, for transactional tables the timestamp oracle ensures
> uniqueness of timestamps (via transaction IDs), and the filtering ensures that a
> scan sees the "true" last committed value for a row. Thus there is no hard
> dependency on performing this work on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC
> calls (which have proved troublesome for HBase). It would require returning more
> data to the client (i.e. the prior row value), but this seems like a reasonable
> tradeoff.
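
As a simplified illustration of the client-side maintenance described in the issue
(not Phoenix's actual implementation, which builds these mutations through
IndexMaintainer), the sketch below turns a prior and a new indexed value into a
Delete of the old index row and a Put of the new one, both stamped with the
transaction's write timestamp. The column family "0", the qualifier "_0", and the
indexRowKey(...) helper are hypothetical stand-ins.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientSideIndexMaintenanceSketch {

    static final byte[] FAM = Bytes.toBytes("0");        // assumed column family
    static final byte[] EMPTY_CQ = Bytes.toBytes("_0");  // assumed empty key value qualifier

    /** Builds index mutations for one data row update at the transaction's write timestamp. */
    public static List<Mutation> generateIndexUpdates(byte[] dataRowKey,
                                                      byte[] priorIndexedValue,
                                                      byte[] newIndexedValue,
                                                      long writeTimestamp) {
        List<Mutation> indexMutations = new ArrayList<>();
        if (priorIndexedValue != null) {
            // Remove the index row that pointed at the prior value of the indexed column.
            Delete delete = new Delete(indexRowKey(priorIndexedValue, dataRowKey));
            delete.addFamily(FAM, writeTimestamp);
            indexMutations.add(delete);
        }
        if (newIndexedValue != null) {
            // Add the index row for the new value at the same transactional timestamp.
            Put put = new Put(indexRowKey(newIndexedValue, dataRowKey));
            put.addColumn(FAM, EMPTY_CQ, writeTimestamp, Bytes.toBytes("x"));
            indexMutations.add(put);
        }
        return indexMutations;
    }

    /** Hypothetical index row key: indexed value followed by the data row key. */
    private static byte[] indexRowKey(byte[] indexedValue, byte[] dataRowKey) {
        return Bytes.add(indexedValue, dataRowKey);
    }
}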
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)