[ https://issues.apache.org/jira/browse/PHOENIX-6491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17560868#comment-17560868 ]

ASF GitHub Bot commented on PHOENIX-6491:
-----------------------------------------

dbwong commented on code in PR #1430:
URL: https://github.com/apache/phoenix/pull/1430#discussion_r910690607


##########
phoenix-core/src/test/java/org/apache/phoenix/jdbc/FailoverPhoenixConnectionTest.java:
##########
@@ -0,0 +1,217 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.mockito.Matchers.any;
+import static org.mockito.Mockito.doThrow;
+import static org.mockito.Mockito.never;
+import static org.mockito.Mockito.times;
+import static org.mockito.Mockito.verify;
+import static org.mockito.Mockito.when;
+
+import java.sql.Connection;
+import java.sql.SQLException;
+import java.util.Properties;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.junit.Before;
+import org.junit.Test;
+import org.mockito.Mock;
+import org.mockito.MockitoAnnotations;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.phoenix.exception.FailoverSQLException;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.jdbc.HighAvailabilityGroup.HAGroupInfo;
+
+/**
+ * Unit test for {@link FailoverPhoenixConnection}.
+ *
+ * @see FailoverPhoenixConnectionIT
+ */
+public class FailoverPhoenixConnectionTest {
+    private static final Logger LOG = LoggerFactory.getLogger(FailoverPhoenixConnectionTest.class);
+
+    @Mock PhoenixConnection connection1;
+    @Mock PhoenixConnection connection2;
+    @Mock HighAvailabilityGroup haGroup;
+
+    final HAGroupInfo haGroupInfo = new HAGroupInfo("fake", "zk1", "zk2");
+    FailoverPhoenixConnection failoverConnection; // this connection itself is not mocked or spied.
+
+    @Before
+    public void init() throws SQLException {
+        MockitoAnnotations.initMocks(this);
+        when(haGroup.getGroupInfo()).thenReturn(haGroupInfo);
+        when(haGroup.connectActive(any(Properties.class))).thenReturn(connection1);
+
+        failoverConnection = new FailoverPhoenixConnection(haGroup, new Properties());
+    }
+
+    /**
+     * Test helper method {@link FailoverPhoenixConnection#wrapActionDuringFailover}.
+     */
+    @Test
+    public void testWrapActionDuringFailover() throws SQLException {
+        // Test SupplierWithSQLException which returns a value
+        String str = "Hello, World!";
+        assertEquals(str, failoverConnection.wrapActionDuringFailover(() -> str));
+
+        // Test RunWithSQLException which does not return value
+        final AtomicInteger counter = new AtomicInteger(0);
+        failoverConnection.wrapActionDuringFailover(counter::incrementAndGet);
+        assertEquals(1, counter.get());
+    }
+
+    /**
+     * Test that after calling failover(), the old connection is closed with FailoverSQLException,
+     * and a new Phoenix connection is opened.
+     */
+    @Test
+    public void testFailover() throws SQLException {
+        // Make HAGroup return a different Phoenix connection the next time it is called
+        when(haGroup.connectActive(any(Properties.class))).thenReturn(connection2);
+
+        // explicitly trigger failover
+        failoverConnection.failover(1000L);
+
+        // The old connection should have been closed due to failover
+        verify(connection1, times(1)).close(any(FailoverSQLException.class));
+        // A new Phoenix connection is wrapped underneath
+        assertEquals(connection2, failoverConnection.getWrappedConnection());
+    }
+
+    /**
+     * Test static {@link FailoverPhoenixConnection#failover(Connection, long)} method.
+     */
+    @Test
+    public void testFailoverStatic() throws SQLException {
+        try {
+            FailoverPhoenixConnection.failover(connection1, 1000L);
+            fail("Should have failed since a plain Phoenix connection cannot failover!");
+        } catch (SQLException e) {
+            LOG.info("Got expected exception when trying to failover on non-HA connection", e);

Review Comment:
   Done





> Phoenix High Availability
> -------------------------
>
>                 Key: PHOENIX-6491
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6491
>             Project: Phoenix
>          Issue Type: New Feature
>          Components: core
>            Reporter: Daniel Wong
>            Assignee: Daniel Wong
>            Priority: Major
>
> This JIRA proposes the Phoenix High Availability (HA) feature, which allows 
> Phoenix users to interact with multiple Phoenix/HBase clusters in order to 
> achieve higher availability than a single cluster provides. In particular, 
> we target the common deployment configuration of 2 HBase clusters with 
> master/master asynchronous replication enabled between the queried tables, 
> but with consideration for future extensions in use cases, replication, and 
> number of clusters.
> Currently targeted use cases:
>  # Active-Standby HA for disaster recovery: enables end users to switch HBase 
> clusters (triggered by administrators) collectively across multiple clients 
> without restarting.
>  # Active-Active HA for immutable use cases with point-get queries and no 
> deletes: enables a client to connect to both clusters simultaneously for 
> these use cases, which inherently have relaxed consistency requirements.
> Concepts:
>  * HA Group - An HA group is an association between a pair of HBase clusters, 
> a group of Phoenix clients, metadata state, and an HA policy (see below). HA 
> groups are pre-defined and a client provides the group name when creating a 
> phoenix connection to the clusters in that HA group. Note that the same pair 
> of HBase clusters can be in multiple HA groups. This allows clients to be 
> grouped based on different use cases, availability requirements, consistency 
> requirements, and load balancing.
>  * HA Policy - Every HA group has an associated HA policy which specifies how 
> to use the pair of HBase clusters. This is implemented by an interface that 
> replaces the JDBC Connection, as well as changes in the public APIs in 
> QueryServices. Currently, there are two policies, one for each targeted use 
> case defined above. More HA policies may be supported in the future for new 
> use cases.
>  * Metadata Store - Stores the state / manages the state transitions of an HA 
> group. For example in the Active-Standby setup the store manages which 
> cluster is currently Active to which all clients in that HA group should 
> connect. For a particular HA group an entry is referred to as a Cluster Role 
> Record.
>  * HA Client - JDBC implementation as well as a handler for metadata store 
> state changes. End users will use it via PhoenixDriver with a JDBC string in 
> the special format {{jdbc:phoenix:[zk1,zk2,zk3|zk1',zk2',zk3']}} and the HA 
> group name in the connection properties. Using such a JDBC connection to 
> create a {{Statement}} or query a {{ResultSet}} does not require any 
> application code change. Internally, the implementation will serve incoming 
> client operation requests according to the HA policy of that group.
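> As a minimal sketch of the URL format above, plain Java can compose the HA 
> JDBC string; note that the ZK quorum values and the property key 
> "phoenix.ha.group" below are illustrative assumptions, not confirmed 
> constants from the implementation:

```java
import java.util.Properties;

/** Sketch of composing the HA JDBC URL described above (assumptions noted inline). */
public class HaUrlSketch {
    static String haUrl(String quorum1, String quorum2) {
        // The two ZooKeeper quorums are joined with '|' inside brackets,
        // per the jdbc:phoenix:[zk1,zk2,zk3|zk1',zk2',zk3'] format above.
        return "jdbc:phoenix:[" + quorum1 + "|" + quorum2 + "]";
    }

    public static void main(String[] args) {
        String url = haUrl("zk1,zk2,zk3", "zk1',zk2',zk3'");
        System.out.println(url);

        // The HA group name is passed via connection properties; the key
        // "phoenix.ha.group" is a hypothetical name used for illustration.
        Properties props = new Properties();
        props.setProperty("phoenix.ha.group", "example-group");

        // A client would then connect as usual (requires live clusters):
        // Connection conn = DriverManager.getConnection(url, props);
    }
}
```

> Statements and result sets are then used through the standard JDBC API, 
> which is why no application code change is needed.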
> More details to come with a detailed design document.
> Not Supported: 
>     Multiple Kerberos authentication schemes; each cluster must use the same 
> auth. This could be addressed in a future release.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
