joao-r-reis commented on code in PR #1901:
URL: https://github.com/apache/cassandra-gocql-driver/pull/1901#discussion_r2382750209


##########
doc.go:
##########
@@ -326,24 +559,129 @@
 // With single-partition batches you can send the batch directly to the node for the partition without incurring the
 // additional network hop.
 //
-// It is also possible to pass entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec.
+// It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to [Query.Exec].
 // There are differences in how those are executed.
 // BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement.
 // Batch.Exec prepares individual statements in the batch.
 // If you have variable-length batches using the same statement, using Batch.Exec is more efficient.
 //
 // See Example_batch for an example.
 //
+// The [Batch] API provides a fluent interface for building and executing batch operations:
+//
+//     // Create and execute a batch using fluent API
+//     err := session.Batch(LoggedBatch).
+//             Query("INSERT INTO table1 (id, name) VALUES (?, ?)", id1, name1).
+//             Query("INSERT INTO table2 (id, value) VALUES (?, ?)", id2, value2).
+//             Exec()
+//
+//     // Lightweight transactions with batches
+//     applied, iter, err := session.Batch(LoggedBatch).
+//             Query("INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS", id, name).
+//             ExecCAS()
+//     if err != nil {
+//             // handle error
+//     }
+//     if !applied {
+//             // handle conditional failure
+//     }
+//
 // # Lightweight transactions
 //
-// Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an
+// [Query.ScanCAS] or [Query.MapScanCAS] can be used to execute a single-statement lightweight transaction (an
 // INSERT/UPDATE .. IF statement) and read its result. See example for Query.MapScanCAS.
 //
 // Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional
-// statement. All the conditions must return true for the batch to be applied. You can use Batch.ExecCAS and
-// Batch.MapExecCAS when executing the batch to learn about the result of the LWT. See example for
+// statement. All the conditions must return true for the batch to be applied. You can use [Batch.ExecCAS] and
+// [Batch.MapExecCAS] when executing the batch to learn about the result of the LWT. See example for
 // Batch.MapExecCAS.
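A single-statement LWT with Query.ScanCAS can be sketched as follows. This is an illustrative fragment only: it assumes a live `session`, a `users` table with text columns, and example `id`/`name` values; on a failed condition, the existing row is scanned into the destination variables.

```go
// Sketch: insert-if-not-exists, reading back the conflicting row on failure.
// `session`, `id`, and `name` are assumed to exist in the surrounding code.
var existingID, existingName string
applied, err := session.Query(
        "INSERT INTO users (id, name) VALUES (?, ?) IF NOT EXISTS", id, name,
).ScanCAS(&existingID, &existingName)
if err != nil {
        // handle error
}
if !applied {
        // row already existed; existingID/existingName hold the current values
}
```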
 //
+// # SERIAL Consistency for Reads
+//
+// The driver supports SERIAL and LOCAL_SERIAL consistency levels on SELECT statements.
+// These special consistency levels are designed for reading data that may have been written using
+// lightweight transactions (LWT) with IF conditions, providing linearizable consistency guarantees.
+//
+// When to use SERIAL consistency levels:
+//
+// Use SERIAL or LOCAL_SERIAL consistency when you need to:
+//   - Read the most recent committed value after lightweight transactions
+//   - Ensure linearizable consistency (stronger than eventual consistency)
+//   - Read data that might have uncommitted lightweight transactions in progress
+//
+// Important considerations:
+//
+//   - SERIAL reads have higher latency and resource usage than normal reads
+//   - Only use when you specifically need linearizable consistency
+//   - If a SERIAL read finds an uncommitted transaction, it will commit that transaction
+//   - Most applications should use regular consistency levels (ONE, QUORUM, etc.)
+//
+// # Immutable Execution
+//
+// [Query] and [Batch] objects follow an immutable execution model that enables safe reuse and concurrent
+// execution without object mutation.
+//
+// Query and Batch Object Reusability:

Review Comment:
   added a section for this



##########
doc.go:
##########
@@ -50,25 +54,230 @@
 // protocol version explicitly, as it's not defined which version will be used in certain situations (for example
 // during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes).
 //
+// Native protocol versions 3, 4, and 5 are supported.
+// For features like per-query keyspace setting and timestamp override, use native protocol version 5.
+//
 // The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version.
 // If you use a replace directive in go.mod, the driver will send information about the replacement module instead.
 //
-// When ready, create a session from the configuration. Don't forget to Close the session once you are done with it:
+// When ready, create a session from the configuration. Don't forget to [Session.Close] the session once you are done with it:
 //
 //     session, err := cluster.CreateSession()
 //     if err != nil {
 //             return err
 //     }
 //     defer session.Close()
 //
+// # Reconnection and Host Recovery
+//
+// The driver provides robust reconnection mechanisms to handle network failures and host outages.
+// Two main configuration settings control reconnection behavior:
+//
+//   - ClusterConfig.ReconnectionPolicy: Controls retry behavior for immediate connection failures, query-driven reconnection, and background recovery
+//   - ClusterConfig.ReconnectInterval: Controls background recovery of DOWN hosts
+//
+// A [ReconnectionPolicy] determines the retry schedule used in all of these cases. Two implementations are provided.
+//
+// [ConstantReconnectionPolicy] provides predictable fixed intervals (the default):
+//
+//     cluster.ReconnectionPolicy = &gocql.ConstantReconnectionPolicy{
+//         MaxRetries: 3,               // Maximum retry attempts
+//         Interval:   1 * time.Second, // Fixed interval between retries
+//     }
+//
+// [ExponentialReconnectionPolicy] provides gentler backoff with capped intervals:
+//
+//     cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
+//         MaxRetries:      5,                // 6 total attempts: 0+1+2+4+8+15 = 30s total
+//         InitialInterval: 1 * time.Second,  // Initial retry interval
+//         MaxInterval:     15 * time.Second, // Maximum retry interval (prevents excessive delays)
+//     }
+//
+// Note: Each reconnection attempt sequence starts fresh from InitialInterval.
+// This applies both to immediate connection failures and each ClusterConfig.ReconnectInterval cycle.
+// For example, if ClusterConfig.ReconnectInterval=60s, every 60 seconds the background process
+// starts a new sequence beginning at InitialInterval, not continuing from where
+// the previous 60-second cycle ended.
+//
+// ClusterConfig.ReconnectInterval controls background recovery of DOWN hosts. When a host is marked DOWN, this process periodically
+// attempts reconnection using the same ReconnectionPolicy settings:
+//
+//     cluster.ReconnectInterval = 60 * time.Second  // Check DOWN hosts every 60 seconds (default)
+//
+// Setting ClusterConfig.ReconnectInterval to 0 disables background reconnection.
+//
+// The reconnection process involves several components working together in a specific sequence:
+//
+//  1. Individual Connection Reconnection - Immediate retry attempts for failed connections within UP hosts
+//  2. Host State Management - Marking hosts DOWN when all connections fail and retries are exhausted
+//  3. Background Recovery - Periodic reconnection attempts for DOWN hosts via ReconnectInterval
+//
+// Individual connection reconnection occurs when connections fail within a host's pool, and the driver immediately attempts
+// reconnection using ReconnectionPolicy. For hosts that remain UP (with working connections), failed individual connections
+// are reconnected on a query-driven basis - every query execution triggers asynchronous reconnection attempts for missing
+// connections. Queries proceed immediately using available connections while reconnection happens asynchronously in the
+// background. There is no query latency impact from reconnection attempts. Multiple concurrent queries to the same host
+// will not trigger parallel reconnection attempts - the driver uses a "filling" flag to ensure only one reconnection process runs per host.
+//
+// Host state management determines when a host is marked DOWN. Only when ALL connections to a host fail and ReconnectionPolicy
+// retries are exhausted does the host get marked DOWN. DOWN hosts are excluded from query routing. Since DOWN hosts don't
+// receive queries, they cannot benefit from query-driven reconnection. This is why the background ClusterConfig.ReconnectInterval process
+// is essential for DOWN host recovery.
+//
+// Background recovery through ClusterConfig.ReconnectInterval periodically attempts to reconnect DOWN hosts using ReconnectionPolicy settings.
+// Event-driven recovery also triggers immediate reconnection when Cassandra sends STATUS_CHANGE UP events.
+//
+// The complete recovery process follows these steps:
+//
+//  1. Connection fails → ReconnectionPolicy immediate retry attempts
+//  2. Query-driven recovery → Each query to partially-failed hosts triggers reconnection attempts
+//  3. Host marked DOWN → All connections failed and retries exhausted
+//  4. Background recovery → ClusterConfig.ReconnectInterval process attempts reconnection using ReconnectionPolicy
+//  5. Event recovery → Cassandra events can trigger immediate reconnection
+//
+// Here's a practical example showing how the settings work together:
+//
+//     cluster.ReconnectionPolicy = &gocql.ExponentialReconnectionPolicy{
+//         MaxRetries:      8,                // 9 total attempts (0s, 1s, 2s, 4s, 8s, 16s, 30s, 30s, 30s)
+//         InitialInterval: 1 * time.Second,  // Starts at 1 second
+//         MaxInterval:     30 * time.Second, // Caps exponential growth at 30 seconds
+//     }
+//
+//     cluster.ReconnectInterval = 60 * time.Second  // Background checks every 60 seconds
+//
+// Timeline Example: With this configuration, when a host loses ALL connections:
+//
+//     T=0:00      - Host has 2 connections, both fail
+//     T=0:00      - Immediate reconnection attempt 1: 0s delay
+//     T=0:01      - Immediate reconnection attempt 2: 1s delay
+//     T=0:03      - Immediate reconnection attempt 3: 2s delay
+//     T=0:07      - Immediate reconnection attempt 4: 4s delay
+//     T=0:15      - Immediate reconnection attempt 5: 8s delay
+//     T=0:31      - Immediate reconnection attempt 6: 16s delay
+//     T=1:01      - Immediate reconnection attempt 7: 30s delay (capped by MaxInterval)
+//     T=1:31      - Immediate reconnection attempt 8: 30s delay
+//     T=2:01      - Immediate reconnection attempt 9: 30s delay
+//     T=2:31      - All immediate attempts failed, host marked DOWN
+//
+//     T=3:31      - Background recovery attempt 1 starts (60s after DOWN)
+//                 ReconnectionPolicy sequence: 0s, 1s, 2s, 4s, 8s, 16s, 30s, 30s, 30s
+//
+//     T=4:31      - ClusterConfig.ReconnectInterval timer fires, tick buffered (timer channel capacity=1)
+//     T=5:31      - ClusterConfig.ReconnectInterval timer fires again, there is already a tick buffered so ignore
+//     T=5:32      - Background recovery attempt 1 completes (after 2:01), immediately reads buffered tick
+//     T=5:32      - Background recovery attempt 2 starts (buffered timer from T=5:31)
+//     T=6:32      - ClusterConfig.ReconnectInterval timer fires, tick buffered
+//     T=7:32      - ClusterConfig.ReconnectInterval timer fires again, there is already a tick buffered so ignore
+//     T=7:33      - Background recovery attempt 2 completes (after 2:01), immediately reads buffered tick
+//     T=7:33      - Background recovery attempt 3 starts (buffered timer from T=7:32)
+//
+// Timer Behavior and Predictable Timing:
+//
+// Note: [time.Ticker].C has buffer capacity=1, and Go drops ticks for "slow receivers."
+// The reconnection process is a slow receiver (taking 2+ minutes vs the 60s interval).
+// The first missed tick gets buffered, subsequent ticks are dropped. When reconnection
+// completes, it immediately reads the buffered tick and starts the next attempt.
+// This causes attempts to run back-to-back at the ReconnectionPolicy duration interval
+// (121s) instead of the intended ClusterConfig.ReconnectInterval (60s), but timing remains predictable.
+//
+// To avoid this buffering/dropping behavior, ensure ClusterConfig.ReconnectInterval is larger than the
+// total ReconnectionPolicy duration. You can achieve this by either:
+//
+//  1. Increasing ClusterConfig.ReconnectInterval (e.g., 150s > 121s sequence duration)
+//  2. Reducing ReconnectionPolicy duration (e.g., 30s sequence < 60s ClusterConfig.ReconnectInterval)
+//
+// This ensures predictable timing with each recovery attempt starting exactly ClusterConfig.ReconnectInterval apart.
+// Approach #2 provides faster recovery while maintaining predictable timing.
+//
+// Individual failed connections within UP hosts are reconnected asynchronously without affecting query performance.
+//
+// Best Practices and Configuration Guidelines:
+//
+//   - ReconnectionPolicy: Use ConstantReconnectionPolicy for predictable behavior or ExponentialReconnectionPolicy
+//     for gentler recovery. Aggressive settings affect background reconnection frequency but don't impact query latency
+//   - ClusterConfig.ReconnectInterval: Set to 30-60 seconds for most cases. Shorter intervals provide faster recovery but more traffic
+//   - Timing Predictability: For predictable background recovery timing, ensure ClusterConfig.ReconnectInterval exceeds the total
+//     ReconnectionPolicy sequence duration. This prevents Go's ticker from buffering/dropping ticks due to "slow receiver"
+//     behavior. You can achieve this by either increasing ClusterConfig.ReconnectInterval or reducing ReconnectionPolicy duration
+//     (fewer retries/shorter intervals). The latter approach provides faster recovery while maintaining predictable timing
+//   - Monitoring: Enable logging to observe reconnection behavior and tune settings
+//
+// # Compression
+//
+// The driver supports Snappy and LZ4 compression of protocol frames.
+//
+// For Snappy compression (via [github.com/apache/cassandra-gocql-driver/v2/snappy] package):
+//
+//     import "github.com/apache/cassandra-gocql-driver/v2/snappy"
+//
+//     cluster.Compressor = &snappy.SnappyCompressor{}
+//
+// For LZ4 compression (via [github.com/apache/cassandra-gocql-driver/v2/lz4] package):
+//
+//     import "github.com/apache/cassandra-gocql-driver/v2/lz4"
+//
+//     cluster.Compressor = &lz4.LZ4Compressor{}
+//
+// Both compressors use efficient append-like semantics for optimal performance and memory usage.
+//
+// # Structured Logging
+//
+// The driver provides structured logging through the [StructuredLogger] interface.
+// Built-in integrations are available for popular logging libraries:
+//
+// For Zap logger (via [github.com/apache/cassandra-gocql-driver/v2/gocqlzap] package):
+//
+//     import "github.com/apache/cassandra-gocql-driver/v2/gocqlzap"
+//
+//     zapLogger, _ := zap.NewProduction()
+//     cluster.Logger = gocqlzap.NewZapLogger(zapLogger)
+//
+// For Zerolog (via [github.com/apache/cassandra-gocql-driver/v2/gocqlzerolog] package):
+//
+//     import "github.com/apache/cassandra-gocql-driver/v2/gocqlzerolog"
+//
+//     zerologLogger := zerolog.New(os.Stdout).With().Timestamp().Logger()
+//     cluster.Logger = gocqlzerolog.NewZerologLogger(&zerologLogger)
+//
+// You can also use the built-in standard library logger:
+//
+//     cluster.Logger = gocql.NewLogger(gocql.LogLevelInfo)
+//
+// # Native Protocol Version 5 Features
+//
+// Native protocol version 5 provides several advanced capabilities:
+//
+// Set keyspace for individual queries (useful for multi-tenant applications):
+//
+//     err := session.Query("SELECT * FROM table").SetKeyspace("tenant1").Exec()
+//
+// Target queries to specific nodes (useful for virtual tables in Cassandra 4.0+):
+//
+//     err := session.Query("SELECT * FROM system_views.settings").
+//             SetHostID("host-uuid").Exec()
+//
+// Use current timestamp override for testing and consistency:
+//
+//     err := session.Query("INSERT INTO table (id, data) VALUES (?, ?)").
+//             WithNowInSeconds(specificTimestamp).
+//             Bind(id, data).Exec()
+//
+// These features are also available on batch operations:
+//
+//     err := session.Batch(LoggedBatch).
+//             Query("INSERT INTO table (id, data) VALUES (?, ?)", id, data).
+//             SetKeyspace("tenant1").
+//             WithNowInSeconds(specificTimestamp).
+//             Exec()

Review Comment:
   done



##########
UPGRADE_GUIDE.md:
##########
@@ -0,0 +1,748 @@
+<!--
+#
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+-->
+
+# GoCQL Major Version Upgrade Guide
+
+This guide helps you migrate between major versions of the GoCQL driver. Each major version introduces significant changes that may require code modifications.
+
+
+
+## Available Upgrade Paths
+
+- [v1.x → v2.x](#upgrading-from-v1x-to-v2x)
+- [Future version upgrades will be documented here as they become available]
+
+---
+
+## Upgrading from v1.x to v2.x
+
+Version 2.0.0 represents a major overhaul of the GoCQL driver with significant API changes, new features, and improvements. This migration requires careful planning and testing.
+
+**Important Prerequisites:**
+- **Minimum Go version**: Go 1.19+
+- **Minimum Cassandra version**: 2.1+ (recommended 4.1+ for full feature support)
+- **Supported protocol versions**: 3, 4, 5 (Cassandra 2.1+; versions 1 and 2 are no longer supported)
+
+### Table of Contents
+
+- [Protocol Version / Cassandra Version Support](#protocol-version--cassandra-version-support)
+- [Breaking Changes](#breaking-changes)
+  - [Module and Package Changes](#module-and-package-changes)
+  - [Removed Global Functions](#removed-global-functions)
+  - [Session Setter Methods Removed](#session-setter-methods-removed)
+  - [Methods Moved from Query/Batch to Iter](#methods-moved-from-querybatch-to-iter)
+  - [Logging System Overhaul](#logging-system-overhaul)
+  - [TimeoutLimit Variable Removal](#timeoutlimit-variable-removal)
+  - [Advanced API Changes](#advanced-api-changes)
+    - [HostInfo Method Visibility Changes](#hostinfo-method-visibility-changes)
+    - [HostSelectionPolicy Interface Changes](#hostselectionpolicy-interface-changes)
+    - [ExecutableQuery Interface Deprecated](#executablequery-interface-deprecated)
+- [Changes that might lead to runtime errors](#changes-that-might-lead-to-runtime-errors)
+  - [PasswordAuthenticator Behavior Change](#passwordauthenticator-behavior-change)
+  - [CQL to Go type mapping for inet columns changed in MapScan/SliceMap](#cql-to-go-type-mapping-for-inet-columns-changed-in-mapscanslicemap)
+  - [NULL Collections Now Return nil Instead of Empty Collections in MapScan/SliceMap](#null-collections-now-return-nil-instead-of-empty-collections-in-mapscanslicemap)
+- [Deprecation Notices](#deprecation-notices)
+  - [Example Migrations](#example-migrations)
+
+---
+
+### Protocol Version / Cassandra Version Support
+
+**Protocol versions 1 and 2 removed:**
+
+gocql v2.x dropped support for old protocol versions. Here's the mapping of protocol versions to Cassandra versions:
+
+| Protocol Version | Cassandra Versions | Status in v2.x |
+|------------------|-------------------|----------------|
+| 1 | 1.2 - 2.0 | ❌ **REMOVED** |
+| 2 | 2.0 - 2.2 | ❌ **REMOVED** |
+| 3 | 2.1+ | ✅ **Minimum supported** |
+| 4 | 2.2+ | ✅  |
+| 5 | 4.0+ | ✅ **Latest** |
+
+```go
+// OLD (v1.x) - NO LONGER SUPPORTED
+cluster.ProtoVersion = 1  // ❌ Runtime error during connection
+cluster.ProtoVersion = 2  // ❌ Runtime error during connection 
+
+// NEW (v2.x) - Minimum version 3
+cluster.ProtoVersion = 3  // ✅ Minimum supported (Cassandra 2.1+)
+cluster.ProtoVersion = 4  // ✅ (Cassandra 2.2+)
+cluster.ProtoVersion = 5  // ✅ Latest (Cassandra 4.0+)
+// OR omit for auto-negotiation (recommended)
+```
+
+**Runtime error you'll see:**
+```bash
+gocql: unsupported protocol response version: 1
+# OR
+gocql: unsupported protocol response version: 2
+```
+
+**Migration:** Update your cluster configuration to use protocol version 3 or higher, or remove the explicit ProtoVersion setting to use auto-negotiation:
+```go
+// Option 1: Explicit version (minimum 3)
+cluster.ProtoVersion = 3  // For Cassandra 2.1+
+
+// Option 2: Auto-negotiation (recommended)
+cluster := gocql.NewCluster("127.0.0.1")
+// ProtoVersion will be auto-negotiated to the highest supported version
+// This is recommended as it works with any Cassandra 2.1+ version
+```
+
+**Note:** Since gocql v2.x requires Cassandra 2.1+ anyway, most users should use auto-negotiation for the best compatibility unless you have a cluster with nodes that have different Cassandra versions.
+
+---
+
+### Breaking Changes
+
+#### Module and Package Changes
+
+**CRITICAL: All users must update import paths**
+
+The module has been moved to the Apache Software Foundation with a new import path:
+
+```go
+// OLD (v1.x)
+import "github.com/gocql/gocql"
+
+// NEW (v2.x)
+import "github.com/apache/cassandra-gocql-driver/v2"
+```
+
+**Compressor modules converted to packages:**
+
+The Snappy and LZ4 compressors have been reorganized from separate modules into packages within the main driver:
+
+```go
+// OLD (v1.x) - Snappy was part of main module
+cluster.Compressor = &gocql.SnappyCompressor{}
+
+// OLD (v1.x) - LZ4 was a separate module  
+import "github.com/gocql/gocql/lz4"
+
+// NEW (v2.x) - Both are now packages within the main module
+import "github.com/apache/cassandra-gocql-driver/v2/snappy"
+import "github.com/apache/cassandra-gocql-driver/v2/lz4"
+
+cluster.Compressor = &snappy.SnappyCompressor{}  // ✅ New package syntax
+cluster.Compressor = &lz4.LZ4Compressor{}        // ✅ New package syntax
+```
+
+**HostPoolHostPolicy moved to hostpool package:**
+
+The `HostPoolHostPolicy` function has been moved from the main gocql package to the hostpool package:
+
+```go
+// OLD (v1.x) - COMPILATION ERROR in v2.x
+cluster.PoolConfig.HostSelectionPolicy = gocql.HostPoolHostPolicy(hostpool.New(nil))  // ❌ undefined: gocql.HostPoolHostPolicy
+
+// NEW (v2.x) - Import from hostpool package
+import "github.com/apache/cassandra-gocql-driver/v2/hostpool"
+cluster.PoolConfig.HostSelectionPolicy = hostpool.HostPoolHostPolicy(hostpool.New(nil))
+```
+
+**All import path changes:**
+```go
+// OLD (v1.x)
+import "github.com/gocql/gocql"
+import "github.com/gocql/gocql/lz4"  // Was separate module
+
+// NEW (v2.x)
+import "github.com/apache/cassandra-gocql-driver/v2"
+import "github.com/apache/cassandra-gocql-driver/v2/snappy"  // Now package
+import "github.com/apache/cassandra-gocql-driver/v2/lz4"     // Now package
+import "github.com/apache/cassandra-gocql-driver/v2/hostpool"  // For HostPoolHostPolicy
+```
+
+#### Removed Global Functions
+
+**`NewBatch()` function removed (was deprecated):**
+```go
+// OLD (v1.x) - COMPILATION ERROR in v2.x
+batch := gocql.NewBatch(gocql.LoggedBatch)  // ❌ undefined: gocql.NewBatch
+```
+
+**Compilation error you'll see:**
+```bash
+./main.go:42:10: undefined: gocql.NewBatch
+```
+
+**Migration:**
+```go
+// NEW (v2.x) - Use fluent API
+batch := session.Batch(gocql.LoggedBatch)
+```
+
+**`MustParseConsistency()` function removed (was deprecated):**
+```go
+// OLD (v1.x) - COMPILATION ERROR in v2.x
+cons, err := gocql.MustParseConsistency("quorum")  // ❌ undefined: gocql.MustParseConsistency
+```
+
+**Compilation error you'll see:**
+```bash
+./main.go:45:11: undefined: gocql.MustParseConsistency
+```
+
+**Migration:**
+```go
+// NEW (v2.x) - Use ParseConsistency (panics on error instead of returning an unused error)
+cons := gocql.ParseConsistency("quorum")  // ✅ Direct panic on invalid input
+```
+
+#### Session Setter Methods Removed
+
+**Session setter methods removed for immutability:**
+```go
+// OLD (v1.x) - COMPILATION ERRORS in v2.x
+session.SetTrace(tracer)       // ❌ session.SetTrace undefined
+session.SetConsistency(gocql.Quorum)  // ❌ session.SetConsistency undefined  
+session.SetPageSize(1000)      // ❌ session.SetPageSize undefined
+session.SetPrefetch(0.25)      // ❌ session.SetPrefetch undefined
+```
+
+**Compilation errors you'll see:**
+```bash
+./main.go:45:9: session.SetTrace undefined (type *gocql.Session has no method SetTrace)
+./main.go:46:9: session.SetConsistency undefined (type *gocql.Session has no method SetConsistency)
+./main.go:47:9: session.SetPageSize undefined (type *gocql.Session has no method SetPageSize)
+./main.go:48:9: session.SetPrefetch undefined (type *gocql.Session has no method SetPrefetch)
+```
+
+**Migration options:**
+```go
+// NEW (v2.x) - Option 1: Set defaults via ClusterConfig
+cluster := gocql.NewCluster("127.0.0.1")
+cluster.Consistency = gocql.Quorum
+cluster.PageSize = 1000
+cluster.NextPagePrefetch = 0.25
+cluster.Tracer = tracer
+
+// NEW (v2.x) - Option 2: Configure per Query/Batch
+query := session.Query("SELECT ...").
+    Trace(tracer).
+    Consistency(gocql.Quorum).
+    PageSize(1000).
+    Prefetch(0.25)
+
+batch := session.Batch(gocql.LoggedBatch).
+    Trace(tracer).
+    Consistency(gocql.Quorum)
+```
+
+#### Methods Moved from Query/Batch to Iter
+
+**Execution-specific methods moved from Query/Batch objects to Iter objects:**
+```go
+// OLD (v1.x) - COMPILATION ERRORS in v2.x
+query := session.Query("SELECT * FROM users WHERE id = ?", userID)
+iter := query.Iter()
+// ... process results
+attempts := query.Attempts()    // ❌ query.Attempts undefined
+latency := query.Latency()      // ❌ query.Latency undefined
+host := query.Host()            // ❌ query.Host undefined
+```
+
+**Compilation errors you'll see:**
+```bash
+./main.go:52:12: query.Attempts undefined (type *gocql.Query has no method Attempts)
+./main.go:53:11: query.Latency undefined (type *gocql.Query has no method Latency)
+./main.go:54:8: query.Host undefined (type *gocql.Query has no method Host)
+```
+
+**Migration:**
+```go
+// NEW (v2.x) - Methods available on Iter
+query := session.Query("SELECT * FROM users WHERE id = ?", userID)
+iter := query.Iter()
+// ... process results
+attempts := iter.Attempts()     // ✅ Now available on Iter
+latency := iter.Latency()       // ✅ Now available on Iter  
+host := iter.Host()             // ✅ Now available on Iter
+defer iter.Close()
+```
+
+**Why this change?** Methods moved to Iter because they represent execution-specific data that only exists after a query is executed, not properties of the query definition itself.
+
+#### Logging System Overhaul
+
+**Complete replacement of logging interface - COMPILATION ERRORS:**
+```go
+// OLD (v1.x) - StdLogger interface - NO LONGER EXISTS
+type StdLogger interface {                   // ❌ Interface removed
+    Print(v ...interface{})                  // ❌ Interface removed
+    Printf(format string, v ...interface{})  // ❌ Interface removed  
+    Println(v ...interface{})                // ❌ Interface removed
+}
+
+// Trying to use old interface causes compilation error
+cluster.Logger = myOldLogger  // ❌ cannot use myOldLogger (type StdLogger) as StructuredLogger
+```
+
+**Compilation error you'll see:**
+```bash
+./main.go:67:16: cannot use myOldLogger (type StdLogger) as type StructuredLogger in assignment:
+       StdLogger does not implement StructuredLogger (missing Debug method)
+```
+
+**Migration - NEW StructuredLogger interface:**
+```go
+// NEW (v2.x) - StructuredLogger interface
+type StructuredLogger interface {
+    Debug(msg string, fields ...Field)
+    Info(msg string, fields ...Field)
+    Warn(msg string, fields ...Field)  
+    Error(msg string, fields ...Field)
+}
+```
+
+**Migration options:**
+```go
+// Option 1: Use built-in default logger
+cluster.Logger = gocql.NewLogger(gocql.LogLevelInfo)
+
+// Option 2: Use provided adapters
+cluster.Logger = gocqlzap.New(zapLogger)      // For Zap
+cluster.Logger = gocqlzerolog.New(zeroLogger) // For Zerolog
+
+// Option 3: Implement StructuredLogger interface
+type MyStructuredLogger struct{}
+func (l MyStructuredLogger) Debug(msg string, fields ...gocql.Field) { /* ... */ }
+func (l MyStructuredLogger) Info(msg string, fields ...gocql.Field)  { /* ... */ }
+func (l MyStructuredLogger) Warn(msg string, fields ...gocql.Field)  { /* ... */ }
+func (l MyStructuredLogger) Error(msg string, fields ...gocql.Field) { /* ... */ }
+
+cluster.Logger = MyStructuredLogger{}
+```
+
+For comprehensive StructuredLogger documentation, see [pkg.go.dev/github.com/apache/cassandra-gocql-driver/v2#hdr-Structured_Logging](https://pkg.go.dev/github.com/apache/cassandra-gocql-driver/v2#hdr-Structured_Logging).
+
+#### Advanced API Changes
+
+**⚠️ This section only applies to advanced users who implement custom interfaces ⚠️**
+
+*Most users can skip this section. These changes only affect you if you've implemented custom `HostSelectionPolicy`, `RetryPolicy`, or other advanced driver interfaces.*
+
+##### HostInfo Method Visibility Changes
+
+**HostInfo method visibility changes - COMPILATION ERRORS:**
+
+Several HostInfo methods have been removed or made private:
+
+```go
+// OLD (v1.x) - COMPILATION ERRORS in v2.x
+host.SetConnectAddress(addr)    // ❌ method undefined
+host.SetHostID(id)              // ❌ method undefined (became private setHostID)
+
+// Runtime behavior changes:
+addr := host.ConnectAddress()   // ⚠️ No longer panics on invalid address, driver validates before creating the object
+```
+
+**Compilation errors you'll see:**
+```bash
+./main.go:45:9: host.SetConnectAddress undefined (type *gocql.HostInfo has no method SetConnectAddress)
+./main.go:46:9: host.SetHostID undefined (type *gocql.HostInfo has no method SetHostID)
+```
+
+**Migration:**
+```go
+// OLD (v1.x) - Setting host connection address
+host.SetConnectAddress(net.ParseIP("192.168.1.100"))
+
+// NEW (v2.x) - Use AddressTranslator instead
+cluster.AddressTranslator = gocql.AddressTranslatorFunc(func(addr net.IP, port int) (net.IP, int) {
+    // Translate addresses here
+    return net.ParseIP("192.168.1.100"), port
+})
+```
+
+##### HostSelectionPolicy Interface Changes
+
+**HostSelectionPolicy.Pick() method signature changed:**
+```go
+// OLD (v1.x) - COMPILATION ERROR in v2.x
+type HostSelectionPolicy interface {
+    Pick(qry ExecutableQuery) NextHost  // ❌ ExecutableQuery no longer exists
+}
+```
+
+**Compilation error you'll see:**
+```bash
+./main.go:25:17: undefined: ExecutableQuery
+```
+
+**Migration:**
+```go
+// NEW (v2.x) - Use ExecutableStatement interface
+type HostSelectionPolicy interface {
+    Pick(stmt ExecutableStatement) NextHost  // ✅ New interface
+}
+```
+
+##### ExecutableQuery Interface Deprecated
+
+**ExecutableQuery interface deprecated and replaced:**
+```go
+// OLD (v1.x) - DEPRECATED in v2.x
+type ExecutableQuery interface {
+    // ... methods
+}
+
+// NEW (v2.x) - Replacement interfaces
+type ExecutableStatement interface {
+    GetRoutingKey() ([]byte, error)
+    Keyspace() string
+    Table() string
+    IsIdempotent() bool
+    GetHostID() string
+    Statement() Statement
+}
+
+// Statement is implemented by Query and Batch
+type Statement interface {
+    Iter() *Iter
+    IterContext(ctx context.Context) *Iter
+    Exec() error
+    ExecContext(ctx context.Context) error
+}
+```
+
+**Migration for custom HostSelectionPolicy implementations:**
+```go
+// OLD (v1.x)
+func (p *MyCustomPolicy) Pick(qry ExecutableQuery) NextHost {
+    routingKey, _ := qry.GetRoutingKey()
+    keyspace := qry.Keyspace()
+    // Access to internal query properties...
+    // ...
+}
+
+// NEW (v2.x) - Core methods available on ExecutableStatement
+func (p *MyCustomPolicy) Pick(stmt ExecutableStatement) NextHost {
+    routingKey, _ := stmt.GetRoutingKey()  // ✅ Same method
+    keyspace := stmt.Keyspace()            // ✅ Same method
+    
+    // For additional properties, type cast the underlying Statement
+    switch s := stmt.Statement().(type) {
+    case *Query:
+        // Access Query-specific properties (READ-ONLY)
+        consistency := s.GetConsistency()
+        pageSize := s.PageSize()
+        // ... other Query methods
+    case *Batch:
+        // Access Batch-specific properties (READ-ONLY)
+        batchType := s.Type
+        consistency := s.GetConsistency()
+        // ... other Batch methods
+    }
+    
+    // ⚠️ WARNING: Only READ from the statement - do NOT modify it!
+    // Modifying the statement in HostSelectionPolicy will not affect the current request execution,
+    // but it WILL modify the original Query/Batch object that the user has a reference to.
+    // Since Query/Batch are not thread-safe, this can cause race conditions and unexpected behavior.
+    
+    // ...
+}
+```
+
+**Impact:** If you have implemented custom `HostSelectionPolicy`, `RetryPolicy`, or other interfaces that accept `ExecutableQuery`, you'll need to update the parameter type to `ExecutableStatement`. The available methods remain mostly the same.
+
+#### TimeoutLimit Variable Removal
+
+**TimeoutLimit variable removed - COMPILATION ERROR:**
+
+The deprecated `TimeoutLimit` global variable has been removed:
+
+```go
+// OLD (v1.x) - COMPILATION ERROR in v2.x
+gocql.TimeoutLimit = 5  // ❌ undefined: gocql.TimeoutLimit
+```
+
+**Compilation error you'll see:**
+```bash
+./main.go:45:9: undefined: gocql.TimeoutLimit
+```
+
+**Behavior after fix:**
+- **v1.x default**: `TimeoutLimit = 0` meant timeouts never closed connections
+- **v2.x**: Behavior will match v1.x default - timeouts never close connections
+- **Impact**: Only affects users who explicitly set `TimeoutLimit > 0`
+
+**Migration:**
+```go
+// OLD (v1.x) - Setting timeout limit (deprecated approach)
+gocql.TimeoutLimit = 5  // Close connection after 5 timeouts
+
+// NEW (v2.x) - No direct replacement recommended
+// Remove any code that sets gocql.TimeoutLimit
+```
+
+**Recommended Approach:**
+We do not recommend a specific code-level migration path for `TimeoutLimit` because the old approach was fundamentally flawed. Instead, focus on proper operational practices:
+
+**Better Solution: Infrastructure Monitoring & Management**
+- **Monitor Cassandra node health** using proper metrics (CPU, memory, disk I/O, GC pauses)
+- **Shut down unhealthy nodes** rather than trying to work around them at the client level
+- **Use proper alerting** on node performance metrics
+- **Address root causes** of node problems rather than masking symptoms
+
+**Why TimeoutLimit was deprecated:**
+In real-world scenarios, if a Cassandra node is unhealthy enough to cause timeouts, simply closing and reopening connections won't solve the underlying problem. The node will likely continue causing latency issues and other performance problems. The correct solution is to identify and fix unhealthy nodes at the infrastructure level.
+
+---
+
+### Changes that might lead to runtime errors
+
+#### PasswordAuthenticator Behavior Change
+
+**PasswordAuthenticator now allows any server authenticator by default - POTENTIAL SECURITY ISSUE:**
+
+In v1.x, PasswordAuthenticator had a hardcoded list of approved authenticators and would reject connections to servers using other authenticators.
+
+In v2.x, PasswordAuthenticator will authenticate with **any** authenticator provided by the server unless you explicitly restrict it:
+
+```go
+// v1.x behavior: Only allowed specific authenticators by default
+// v2.x behavior: Allows ANY authenticator by default
+
+// OLD (v1.x) - Automatic rejection of non-standard authenticators
+cluster.Authenticator = PasswordAuthenticator{
+    Username: "user",
+    Password: "password",
+    // Automatically rejected servers with non-standard authenticators
+}
+
+// NEW (v2.x) - To maintain v1.x security behavior:
+cluster.Authenticator = PasswordAuthenticator{
+    Username: "user",
+    Password: "password",
+    AllowedAuthenticators: []string{
+        "org.apache.cassandra.auth.PasswordAuthenticator",
+        // Add other allowed authenticators here
+    },
+}
+
+// NEW (v2.x) - To allow any authenticator (new default behavior):
+cluster.Authenticator = PasswordAuthenticator{
+    Username: "user", 
+    Password: "password",
+    // AllowedAuthenticators: nil, // or empty slice allows any
+}
+```
+
+**Security Impact:** 
+- **v1.x**: Connections to servers with non-standard authenticators were automatically rejected
+- **v2.x**: Same code will now successfully authenticate with any server authenticator
+- **Risk**: May connect to servers with weaker or unexpected authentication mechanisms
+
+**Migration:** If security is a concern, explicitly set `AllowedAuthenticators` to maintain the restrictive v1.x behavior.
+
+#### CQL to Go type mapping for inet columns changed in MapScan/SliceMap
+
+**inet columns now return `net.IP` instead of `string` in MapScan/SliceMap - RUNTIME PANIC:**
+
+In v1.x, `MapScan()` and `SliceMap()` returned inet columns as `string` values. In v2.x, they now return `net.IP` values, causing runtime panics for existing type assertions.
+
+```go
+// v1.x: inet columns returned as string
+result, _ := session.Query("SELECT inet_col FROM table").Iter().SliceMap()
+ip := result[0]["inet_col"].(string)  // ✅ Worked in v1.x
+
+// v2.x: Same code causes runtime panic
+result, _ := session.Query("SELECT inet_col FROM table").Iter().SliceMap()
+ip := result[0]["inet_col"].(string)  // ❌ PANIC: interface conversion: interface {} is net.IP, not string
+```
+
+**Runtime panic you'll see:**
+```bash
+panic: interface conversion: interface {} is net.IP, not string
+```
+
+**Migration:**
+```go
+// NEW (v2.x) - Use net.IP type assertion
+result, _ := session.Query("SELECT inet_col FROM table").Iter().SliceMap()
+ip := result[0]["inet_col"].(net.IP)  // ✅ Correct type for v2.x
+
+// Convert to string if needed
+ipString := ip.String()
+
+// Or use type switching for compatibility during migration
+switch v := result[0]["inet_col"].(type) {
+case net.IP:
+    ipString := v.String()  // v2.x behavior
+case string:
+    ipString := v          // v1.x behavior (shouldn't happen in v2.x)
+}
+```
+
+**Impact:** This only affects code using `MapScan()` and `SliceMap()` with inet columns. Direct `Scan()` calls are not affected since they require explicit type specification.
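+During a staged migration, a comma-ok type switch keeps row handling tolerant of both representations. A minimal, driver-independent sketch (the `inetToString` helper and the simulated cells are illustrative, not part of the gocql API):
+
+```go
+package main
+
+import (
+	"fmt"
+	"net"
+)
+
+// inetToString normalizes a MapScan/SliceMap cell that may hold either the
+// v2.x representation (net.IP) or the legacy v1.x representation (string).
+// Using comma-ok type switching avoids the runtime panic entirely.
+func inetToString(cell interface{}) (string, bool) {
+	switch v := cell.(type) {
+	case net.IP:
+		return v.String(), true // v2.x behavior
+	case string:
+		return v, true // v1.x behavior
+	default:
+		return "", false
+	}
+}
+
+func main() {
+	// Simulate what a row map would contain under each driver version.
+	v2Cell := interface{}(net.ParseIP("192.168.1.100"))
+	v1Cell := interface{}("192.168.1.100")
+
+	s1, _ := inetToString(v2Cell)
+	s2, _ := inetToString(v1Cell)
+	fmt.Println(s1, s2) // 192.168.1.100 192.168.1.100
+}
+```
+
+Once all call sites are on v2.x, the `string` branch can be dropped in favor of a plain `net.IP` assertion.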
+
+#### NULL Collections Now Return nil Instead of Empty Collections in MapScan/SliceMap
+
+**NULL collections (lists, sets, maps) now return `nil` instead of empty collections - POTENTIAL RUNTIME PANIC:**
+
+In v1.x, `MapScan()` and `SliceMap()` returned NULL collections as empty slices/maps. In v2.x, they now return `nil` slices/maps, which can cause panics in code that assumes non-nil collections.
+
+```go
+// v1.x: NULL collections returned as empty
+result, _ := session.Query("SELECT null_list_col FROM table").Iter().SliceMap()
+list := result[0]["null_list_col"].([]string)
+// list was []string{} (empty slice with len=0, cap=0)
+fmt.Println(list == nil)     // false
+fmt.Println(len(list) == 0)  // true
+
+// v2.x: NULL collections now return nil
+result, _ := session.Query("SELECT null_list_col FROM table").Iter().SliceMap()
+list := result[0]["null_list_col"].([]string)
+// list is []string(nil) (nil slice)
+fmt.Println(list == nil)     // true
+fmt.Println(len(list) == 0)  // true (len of nil slice is 0)
+```
+
+**Code whose behavior changes (and may now panic):**
+```go
+// ❌ BREAKS: Explicit nil checks now take the nil branch for NULL collections
+if myList != nil {
+    processItems(myList)  // Reached in v1.x (empty, non-nil slice); not reached in v2.x
+}
+
+// ❌ PANICS: Writing to a nil map (the empty map returned in v1.x accepted writes)
+myMap["key"] = "value"  // panic: assignment to entry in nil map
+
+// ❌ BREAKS: Serialization and comparison distinguish nil from empty
+data, _ := json.Marshal(myList)                // "null" in v2.x instead of "[]"
+same := reflect.DeepEqual(myList, []string{})  // false when myList is nil
+```
+
+**Migration - Use nil-safe patterns:**
+```go
+// ✅ SAFE: Check length only (works for both nil and empty slices)
+if len(myList) > 0 {           // len of nil slice is 0
+    processItems(myList)
+}
+
+// ✅ SAFE: Use append (handles nil slices)
+myList = append(myList, item)  // append works with nil slices
+
+// ✅ SAFE: Range over slices (safe with nil)
+for _, item := range myList {  // range over nil slice is safe (no iterations)
+    processItem(item)
+}
+
+// ✅ SAFE: Initialize before direct assignment
+if myList == nil {
+    myList = make([]string, desiredLength)
+}
+myList[index] = newValue
+
+// ✅ SAFE: copy handles nil source slices (copies zero elements)
+copy(destination, myList)
+
+// ✅ SAFE: Initialize a nil map before writing to it
+if myMap == nil {
+    myMap = make(map[string]string)
+}
+myMap["key"] = "value"
+```
+
+**Why this change was made:** 
+- **Semantic correctness**: NULL ≠ empty. A NULL collection should be `nil`, not an empty collection
+- **Memory efficiency**: `nil` slices use less memory than empty slices
+- **Consistency**: Direct `Scan()` calls already behaved this way
+
+**Impact:** This affects code using `MapScan()` and `SliceMap()` with collection columns (lists, sets, maps) that explicitly checks for `nil` or performs operations that don't handle `nil` slices/maps safely.
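+The nil-safe patterns above follow directly from standard Go slice and map semantics, so they can be verified without a database connection. A stand-alone sketch simulating what `MapScan` would now produce for NULL columns:
+
+```go
+package main
+
+import "fmt"
+
+func main() {
+	// A NULL list column in v2.x now comes back as a nil slice.
+	var nullList []string
+
+	fmt.Println(nullList == nil) // true
+	fmt.Println(len(nullList))   // 0 (len of a nil slice is 0)
+
+	for range nullList {
+		// range over a nil slice performs zero iterations - no panic
+	}
+
+	nullList = append(nullList, "a") // append allocates as needed
+	fmt.Println(len(nullList))       // 1
+
+	// Maps differ: reads from a nil map are safe, writes panic.
+	var nullMap map[string]int
+	fmt.Println(nullMap["missing"]) // 0 (zero value, no panic)
+
+	if nullMap == nil {
+		nullMap = make(map[string]int) // initialize before writing
+	}
+	nullMap["k"] = 1
+	fmt.Println(nullMap["k"]) // 1
+}
+```
+
+Code that sticks to `len`, `append`, `range`, and read-only map access needs no changes; only explicit `nil` comparisons and writes to maps require review.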
+
+---
+
+### Deprecation Notices
+
+The following features are deprecated but still functional in v2.x. **These should not be used in new code.** Plan to migrate away from these before v3.x:

Review Comment:
   done


