virajjasani commented on code in PR #2033:
URL: https://github.com/apache/phoenix/pull/2033#discussion_r1901255834
##########
phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/tasks/CdcStreamPartitionMetadataTask.java:
##########
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.coprocessor.tasks;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.HBaseConfiguration;
+import org.apache.hadoop.hbase.HRegionLocation;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.phoenix.coprocessor.TaskRegionObserver;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.task.ServerTask;
+import org.apache.phoenix.schema.task.SystemTaskParams;
+import org.apache.phoenix.schema.task.Task;
+import org.apache.phoenix.util.CDCUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Timestamp;
+import java.util.List;
+
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CDC_STREAM_NAME;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CDC_STREAM_STATUS_NAME;
+import static org.apache.phoenix.query.QueryServices.PHOENIX_STREAMS_GET_TABLE_REGIONS_TIMEOUT;
+import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_PHOENIX_STREAMS_GET_TABLE_REGIONS_TIMEOUT;
+
+/**
+ * Task to bootstrap partition metadata when CDC is enabled on a table.
+ * Upserts one row for each region of the table into SYSTEM.CDC_STREAM and marks the status as
+ * ENABLED in SYSTEM.CDC_STREAM_STATUS.
+ */
+public class CdcStreamPartitionMetadataTask extends BaseTask {
+
+    public static final Logger LOGGER = LoggerFactory.getLogger(CdcStreamPartitionMetadataTask.class);
+    private static final String CDC_STREAM_STATUS_UPSERT_SQL
+            = "UPSERT INTO " + SYSTEM_CDC_STREAM_STATUS_NAME + " VALUES (?, ?, ?)";
+
+    // parent_partition_id will be null, set partition_end_time to -1
+    private static final String CDC_STREAM_PARTITION_UPSERT_SQL
+            = "UPSERT INTO " + SYSTEM_CDC_STREAM_NAME + " VALUES (?,?,?,null,?,-1,?,?)";
+
+    @Override
+    public TaskRegionObserver.TaskResult run(Task.TaskRecord taskRecord) {
+        Configuration conf = HBaseConfiguration.create(env.getConfiguration());
+        Configuration configuration = HBaseConfiguration.addHbaseResources(conf);
+        int getTableRegionsTimeout = configuration.getInt(PHOENIX_STREAMS_GET_TABLE_REGIONS_TIMEOUT,
+                DEFAULT_PHOENIX_STREAMS_GET_TABLE_REGIONS_TIMEOUT);
+        PhoenixConnection pconn = null;
+        String tableName = taskRecord.getTableName();
+        String streamName = taskRecord.getSchemaName();
+        Timestamp timestamp = taskRecord.getTimeStamp();
+        try {
+            pconn = QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class);
+            List<HRegionLocation> tableRegions = pconn.getQueryServices().getAllTableRegions(
+                    tableName.getBytes(), getTableRegionsTimeout);

Review Comment:
@haridsv I just had an offline discussion with @palashc. He is reusing the existing API that `BaseResultIterators` already uses for every query. The timeout is not for the HBase ConnectionRegistry API; it is how the Phoenix client internally detects that its meta RPC calls are exceeding a given budget. For very large tables with a full table scan, the loop in CQSI#getTableRegions can only time out if the region locations are not yet cached.

Let's assume we have 500k regions and a 2 min timeout set by the CDC Stream Metadata Task, and that none of those region locations are cached at the moment. CQSI#getTableRegions starts issuing an RPC call for each region; if meta is slow to respond and CQSI exceeds the 2 min timeout after 200k region locations, it throws an error. At that point, those 200k region locations are already cached on the client side. The next time the task is retried, CQSI#getTableRegions walks through those 200k cached locations in a few milliseconds because no RPC calls are required, and only starts issuing RPCs for the remaining regions, until every region location is cached.

Given that this API is already used by the Phoenix client on every query, we are confident it works well, and even for very large tables the task can potentially time out only a few times, not every time.
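To illustrate the retry behavior described above, here is a minimal, self-contained sketch. It is not Phoenix's actual CQSI/`getAllTableRegions` code; the class, method, and cache names below are hypothetical, and the timings are simulated. It only shows how a cumulative per-attempt timeout combined with an incremental location cache lets a retried task resume roughly where the previous attempt stopped:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeoutException;

/**
 * Hypothetical sketch, NOT Phoenix code: a client that resolves region
 * locations under a cumulative per-attempt timeout while caching every
 * location it fetches, so a retried attempt resumes from where the
 * previous one timed out.
 */
public class RegionLookupRetrySketch {

    /** Stand-in for the client-side region location cache that survives across attempts. */
    private final Map<Integer, String> locationCache = new ConcurrentHashMap<>();

    /** Stand-in for a meta RPC that resolves the location of a single region. */
    private String fetchFromMeta(int regionIndex) throws InterruptedException {
        Thread.sleep(1); // simulated RPC latency
        return "server-for-region-" + regionIndex;
    }

    /**
     * Resolves all region locations. The timeout only matters while uncached
     * regions still require RPCs; cached lookups are effectively free.
     */
    public List<String> getAllRegionLocations(int regionCount, long timeoutMs)
            throws TimeoutException, InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        List<String> locations = new ArrayList<>(regionCount);
        for (int i = 0; i < regionCount; i++) {
            String location = locationCache.get(i);
            if (location == null) { // cache miss: an RPC is needed, so check the budget first
                if (System.currentTimeMillis() > deadline) {
                    throw new TimeoutException("exceeded " + timeoutMs + " ms after caching "
                            + locationCache.size() + " region locations");
                }
                location = fetchFromMeta(i);
                locationCache.put(i, location); // progress is kept even if this attempt fails
            }
            locations.add(location);
        }
        return locations;
    }

    public static void main(String[] args) throws Exception {
        RegionLookupRetrySketch client = new RegionLookupRetrySketch();
        List<String> locations = null;
        int attempts = 0;
        while (locations == null) { // stand-in for the task framework retrying the task
            attempts++;
            try {
                locations = client.getAllRegionLocations(5_000, 2_000);
            } catch (TimeoutException e) {
                System.out.println("attempt " + attempts + " timed out: " + e.getMessage());
            }
        }
        System.out.println("resolved " + locations.size() + " locations after " + attempts + " attempt(s)");
    }
}
```

With 5,000 simulated regions and a 2-second budget per attempt, the sketch typically needs a few attempts, and each retry skips the locations already cached by earlier attempts, mirroring the 500k-region / 2-minute scenario discussed above.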
