This is an automated email from the ASF dual-hosted git repository.

ipolyzos pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/fluss.git


The following commit(s) were added to refs/heads/main by this push:
     new dccccee54 [docs]fix typos, grammar, casing, and spacing in multiple documents (#1474)
dccccee54 is described below

commit dccccee54064f9db5578eef1ff4dd8e62bdbd716
Author: Zmm <[email protected]>
AuthorDate: Wed Aug 6 14:57:27 2025 +0800

    [docs]fix typos, grammar, casing, and spacing in multiple documents (#1474)
---
 website/docs/engine-flink/datastream.mdx           |  4 +--
 website/docs/engine-flink/getting-started.md       |  8 ++---
 website/docs/engine-flink/lookups.md               |  4 +--
 website/docs/engine-flink/reads.md                 |  4 +--
 website/docs/engine-flink/writes.md                | 10 +++---
 .../docs/install-deploy/deploying-with-docker.md   |  2 +-
 website/docs/maintenance/configuration.md          | 42 +++++++++++-----------
 website/docs/maintenance/observability/logging.md  | 12 +++----
 .../maintenance/observability/monitor-metrics.md   |  2 +-
 .../docs/maintenance/observability/quickstart.md   |  4 +--
 website/docs/security/authentication.md            | 10 +++---
 website/docs/security/authorization.md             |  8 ++---
 website/docs/security/overview.md                  | 10 +++---
 website/docs/table-design/data-types.md            | 42 +++++++++++-----------
 14 files changed, 81 insertions(+), 81 deletions(-)

diff --git a/website/docs/engine-flink/datastream.mdx b/website/docs/engine-flink/datastream.mdx
index 193169e09..02bcbadbc 100644
--- a/website/docs/engine-flink/datastream.mdx
+++ b/website/docs/engine-flink/datastream.mdx
@@ -7,7 +7,7 @@ sidebar_position: 6
 ## Overview
 The Fluss DataStream Connector for Apache Flink provides a Flink DataStream source implementation for reading data from Fluss tables and a Flink DataStream sink implementation for writing data to Fluss tables. It allows you to seamlessly integrate Fluss tables with Flink's DataStream API, enabling you to process data from Fluss in your Flink applications.
 
-Key features of the Fluss Datastream Connector include:
+Key features of the Fluss DataStream Connector include:
 * Reading from both primary key tables and log tables
 * Support for projection pushdown to select specific fields
 * Flexible offset initialization strategies
@@ -191,7 +191,7 @@ DataStreamSource<RowData> stream = env.fromSource(
 // For INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE events
 ```
 
-**Note:** If you are mapping from `RowData` to your pojos object, you might want to include the row kind operation.
+**Note:** If you are mapping from `RowData` to your POJO objects, you might want to include the row kind operation.
 
 #### Reading from a Log Table
 When reading from a log table, all records are emitted with `RowKind.INSERT` since log tables only support appends.
diff --git a/website/docs/engine-flink/getting-started.md b/website/docs/engine-flink/getting-started.md
index 7c72ebf20..d58222c9e 100644
--- a/website/docs/engine-flink/getting-started.md
+++ b/website/docs/engine-flink/getting-started.md
@@ -23,7 +23,7 @@ For Flink's Table API, Fluss supports the following features:
 | Feature support                                   | Flink | Notes                                  |
 |---------------------------------------------------|-------|----------------------------------------|
 | [SQL create catalog](ddl.md#create-catalog)       | ✔️    |                                        |
-| [SQl create database](ddl.md#create-database)     | ✔️    |                                        |
+| [SQL create database](ddl.md#create-database)     | ✔️    |                                        |
 | [SQL drop database](ddl.md#drop-database)         | ✔️    |                                        |
 | [SQL create table](ddl.md#create-table)           | ✔️    |                                        |
 | [SQL create table like](ddl.md#create-table-like) | ✔️    |                                        |
@@ -43,7 +43,7 @@ For Flink's DataStream API, you can see [DataStream API](docs/engine-flink/datas
 ## Preparation when using Flink SQL Client
 - **Download Flink**
 
-Flink runs on all UNIX-like environments, i.e. Linux, Mac OS X, and Cygwin (for Windows).
+Flink runs on all UNIX-like environments, i.e., Linux, Mac OS X, and Cygwin (for Windows).
 If you haven’t downloaded Flink, you can download [the binary release](https://flink.apache.org/downloads.html) of Flink, then extract the archive with the following command.
 ```shell
 tar -xzf flink-1.20.1-bin-scala_2.12.tgz
@@ -70,7 +70,7 @@ You should be able to navigate to the web UI at [localhost:8081](http://localhos
 ```shell
 ps aux | grep flink
 ```
-- **Start a sql client**
+- **Start a SQL Client**
 
 To quickly stop the cluster and all running components, you can use the provided script:
 ```shell
@@ -92,7 +92,7 @@ CREATE CATALOG fluss_catalog WITH (
    you should start the Fluss server first. See [Deploying Fluss](install-deploy/overview.md#how-to-deploy-fluss)
    for how to build a Fluss cluster.
    Here, it is assumed that there is a Fluss cluster running on your local machine and the CoordinatorServer port is 9123.
-2. The` bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
+2. The `bootstrap.servers` configuration is used to discover all nodes within the Fluss cluster. It can be set with one or more (up to three) Fluss server addresses (either CoordinatorServer or TabletServer) separated by commas.
 :::
 
 ## Creating a Table
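
For context on the `bootstrap.servers` note fixed above, a catalog definition passing multiple discovery addresses might look like the following sketch (host names and ports are illustrative assumptions, not taken from this commit):

```sql
-- Sketch: up to three comma-separated Fluss server addresses
-- (CoordinatorServer or TabletServer) can be listed for discovery.
CREATE CATALOG fluss_catalog WITH (
  'type' = 'fluss',
  'bootstrap.servers' = 'coordinator:9123,tablet-1:9124,tablet-2:9124'
);
```

Listing more than one address only affects initial discovery; once connected, the client learns the full set of cluster nodes.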
diff --git a/website/docs/engine-flink/lookups.md b/website/docs/engine-flink/lookups.md
index 69f287fab..f4e5abab1 100644
--- a/website/docs/engine-flink/lookups.md
+++ b/website/docs/engine-flink/lookups.md
@@ -232,7 +232,7 @@ Continuing from the previous prefix lookup example, if our dimension table is a
 ```sql title="Flink SQL"
 -- primary keys are (c_custkey, c_nationkey, dt)
 -- bucket key is (c_custkey)
-CREATE TABLE `fluss_catalog`.`my_db`.`customer_partitioned_with_bukcet_key` (
+CREATE TABLE `fluss_catalog`.`my_db`.`customer_partitioned_with_bucket_key` (
   `c_custkey` INT NOT NULL,
   `c_name` STRING NOT NULL,
   `c_address` STRING NOT NULL,
@@ -259,7 +259,7 @@ INSERT INTO prefix_lookup_join_sink
 SELECT `o`.`o_orderkey`, `o`.`o_totalprice`, `c`.`c_name`, `c`.`c_address`
 FROM 
 (SELECT `orders_with_dt`.*, proctime() AS ptime FROM `orders_with_dt`) AS `o`
-LEFT JOIN `customer_partitioned_with_bukcet_key`
+LEFT JOIN `customer_partitioned_with_bucket_key`
 FOR SYSTEM_TIME AS OF `o`.`ptime` AS `c`
 ON `o`.`o_custkey` = `c`.`c_custkey` AND  `o`.`o_dt` = `c`.`dt`;
 
diff --git a/website/docs/engine-flink/reads.md b/website/docs/engine-flink/reads.md
index bc34e4092..09c65e2ab 100644
--- a/website/docs/engine-flink/reads.md
+++ b/website/docs/engine-flink/reads.md
@@ -7,12 +7,12 @@ sidebar_position: 4
 # Flink Reads
 Fluss supports streaming and batch read with [Apache Flink](https://flink.apache.org/)'s SQL & Table API. Execute the following SQL command to switch execution mode from streaming to batch, and vice versa:
 ```sql title="Flink SQL"
--- Execute the flink job in streaming mode for current session context
+-- Execute the Flink job in streaming mode for current session context
 SET 'execution.runtime-mode' = 'streaming';
 ```
 
 ```sql title="Flink SQL"
--- Execute the flink job in batch mode for current session context
+-- Execute the Flink job in batch mode for current session context
 SET 'execution.runtime-mode' = 'batch';
 ```
 
diff --git a/website/docs/engine-flink/writes.md b/website/docs/engine-flink/writes.md
index cc169e1f4..29af716fd 100644
--- a/website/docs/engine-flink/writes.md
+++ b/website/docs/engine-flink/writes.md
@@ -15,7 +15,7 @@ Fluss primary key tables can accept all types of messages (`INSERT`, `UPDATE_BEF
 They support both streaming and batch modes and are compatible with primary-key tables (for upserting data) as well as log tables (for appending data).
 
 ### Appending Data to the Log Table
-#### Create a Log table.
+#### Create a Log Table.
 ```sql title="Flink SQL"
 CREATE TABLE log_table (
   order_id BIGINT,
@@ -25,7 +25,7 @@ CREATE TABLE log_table (
 );
 ```
 
-#### Insert data into the Log table.
+#### Insert Data into the Log Table.
 ```sql title="Flink SQL"
 CREATE TEMPORARY TABLE source (
   order_id BIGINT,
@@ -91,7 +91,7 @@ SELECT shop_id, user_id, num_orders FROM source;
 
 Fluss supports deleting data for primary-key tables in batch mode via `DELETE FROM` statement. Currently, only single data deletions based on the primary key are supported.
 
-* the primary key table
+* the Primary Key Table
 ```sql title="Flink SQL"
 -- DELETE statement requires batch mode
 SET 'execution.runtime-mode' = 'batch';
@@ -99,7 +99,7 @@ SET 'execution.runtime-mode' = 'batch';
 
 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-DELETE FROM pk_table WHERE shop_id = 10000 and user_id = 123456;
+DELETE FROM pk_table WHERE shop_id = 10000 AND user_id = 123456;
 ```
 
 ## UPDATE
@@ -112,5 +112,5 @@ SET execution.runtime-mode = batch;
 
 ```sql title="Flink SQL"
 -- The condition must include all primary key equality conditions.
-UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 and user_id = 123456;
+UPDATE pk_table SET total_amount = 2 WHERE shop_id = 10000 AND user_id = 123456;
 ```
\ No newline at end of file
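
The `DELETE`/`UPDATE` hunks above reference a `pk_table` whose definition is outside this diff; a minimal sketch of such a table, assuming `(shop_id, user_id)` as the primary key and illustrative column types, would be:

```sql
CREATE TABLE pk_table (
  shop_id BIGINT,
  user_id BIGINT,
  total_amount INT,  -- column names from the diff; types are assumptions
  PRIMARY KEY (shop_id, user_id) NOT ENFORCED
);
```

Because both `shop_id` and `user_id` form the primary key, every `DELETE` or `UPDATE` condition must pin both columns with equality predicates, as the corrected statements in the diff do.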
diff --git a/website/docs/install-deploy/deploying-with-docker.md b/website/docs/install-deploy/deploying-with-docker.md
index ce97598ad..c78a6a756 100644
--- a/website/docs/install-deploy/deploying-with-docker.md
+++ b/website/docs/install-deploy/deploying-with-docker.md
@@ -316,7 +316,7 @@ volumes:
 
 ### Launch the components
 
-Save the `docker-compose.yaml` script and execute the `docker compose up -d` command in the same directory
+Save the `docker-compose.yml` script and execute the `docker compose up -d` command in the same directory
 to create the cluster.
 
 Run the below command to check the container status:
diff --git a/website/docs/maintenance/configuration.md b/website/docs/maintenance/configuration.md
index 9cc996085..4515082fe 100644
--- a/website/docs/maintenance/configuration.md
+++ b/website/docs/maintenance/configuration.md
@@ -26,25 +26,25 @@ during the Fluss cluster working.
 
 ## Common
 
-| Option                                           | Type       | Default      
                                                                                
                                                                         | 
Description                                                                     
                                                                                
                                                                                
                   [...]
-|--------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
+| Option                                           | Type       | Default      
                                                                                
                                                                         | 
Description                                                                     
                                                                                
                                                                                
                   [...]
+|--------------------------------------------------|------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
 | bind.listeners           | String  | (None)  | The network address and port 
to which the server binds for accepting connections. This defines the interface 
and port where the server will listen for incoming requests. The format is 
`{listener_name}://{host}:{port}`, and multiple addresses can be specified, 
separated by commas. Use `0.0.0.0` for the `host` to bind to all available 
interfaces which is dangerous on production and not suggested for production 
usage. The `listener_name` serv [...]
 | advertised.listeners     | String  | (None)  | The externally advertised 
address and port for client connections. Required in distributed environments 
when the bind address is not publicly reachable. Format matches 
`bind.listeners` (`{listener_name}://{host}:{port}`). Defaults to the value of 
`bind.listeners` if not explicitly configured.                                  
                                                                                
                                     [...]
 | internal.listener.name   | String  | FLUSS   | The listener for server 
internal communication.                                                         
                                                                                
                                                                                
                                                                                
                                                                                
                    [...]
 | security.protocol.map                      | Map        |                 | 
A map defining the authentication protocol for each listener. The format is 
`listenerName1:protocol1,listenerName2:protocol2`, e.g., 
`INTERNAL:PLAINTEXT,CLIENT:GSSAPI`. Each listener can be associated with a 
specific authentication protocol. Listeners not included in the map will use 
PLAINTEXT by default, which does not require authentication.                    
                                                  [...]
 | `security.${protocol}.*`                         | String     | (none)       
                                                                                
                                                                         | 
Protocol-specific configuration properties. For example, 
security.sasl.jaas.config for SASL authentication settings.                     
                                                                                
                                          [...]
-| default.bucket.number                            | Integer    | 1            
                                                                                
                                                                         | The 
default number of buckets for a table in Fluss cluster. It's a cluster-level 
parameter and all the tables without specifying bucket number in the cluster 
will use the value as the bucket number.                                        
                     [...]
-| default.replication.factor                       | Integer    | 1            
                                                                                
                                                                         | The 
default replication factor for the log of a table in Fluss cluster. It's a 
cluster-level parameter, and all the tables without specifying replication 
factor in the cluster will use the value as replication factor.                 
                         [...]
-| remote.data.dir                                  | String     | (None)       
                                                                                
                                                                         | The 
directory used for storing the kv snapshot data files and remote log for log 
tiered storage in a Fluss supported filesystem.                                 
                                                                                
                  [...]
-| remote.fs.write-buffer-size                      | MemorySize | 4kb          
                                                                                
                                                                         | The 
default size of the write buffer for writing the local files to remote file 
systems.                                                                        
                                                                                
                   [...]
-| plugin.classloader.parent-first-patterns.default | String     | 
java.,<br/>com.alibaba.fluss.,<br/>javax.annotation.,<br/>org.slf4j,<br/>org.apache.log4j,<br/>org.apache.logging,<br/>org.apache.commons.logging,<br/>ch.qos.logback
 | A (semicolon-separated) list of patterns that specifies which classes should 
always be resolved through the plugin parent ClassLoader first. A pattern is a 
simple prefix that is checked against the fully qualified class name. This 
setting should generally no [...]
-| auto-partition.check.interval                    | Duration   | 10min        
                                                                                
                                                                         | The 
interval of auto partition check. The default value is 10 minutes.              
                                                                                
                                                                                
               [...]
-| max.partition.num                                | Integer    | 1000         
                                                                                
                                                                         | 
Limits the maximum number of partitions that can be created for a partitioned 
table to avoid creating too many partitions.                                    
                                                                                
                     [...]
-| max.bucket.num                                   | Integer    | 128000       
                                                                                
                                                                         | The 
maximum number of buckets that can be created for a table. The default value is 
128000                                                                          
                                                                                
               [...]
-| acl.notification.expiration-time                 | Duration   | 15min        
                                                                                
                                                                         | The 
duration for which ACL notifications are valid before they expire. This 
configuration determines the time window during which an ACL notification is 
considered active. After this duration, the notification will no longer be 
valid and will be discarded. T [...]
-| authorizer.enabled                               | Boolean    | false        
                                                                                
                                                                         | 
Specifies whether to enable the authorization feature. If enabled, access 
control is enforced based on the authorization rules defined in the 
configuration. If disabled, all operations and resources are accessible to all 
users.                                [...]
-| authorizer.type                                  | String     | default      
                                                                                
                                                                         | 
Specifies the type of authorizer to be used for access control. This value 
corresponds to the identifier of the authorization plugin. The default value is 
`default`, which indicates the built-in authorizer implementation. Custom 
authorizers can be implemente [...]
-| super.users                                      | String     | (None)       
                                                                                
                                                                         | A 
semicolon-separated list of superusers who have unrestricted access to all 
operations and resources. Note that the delimiter is semicolon since SSL user 
names may contain comma, and each super user should be specified in the format 
`principal_type:principa [...]
+| default.bucket.number                            | Integer    | 1            
                                                                                
                                                                         | The 
default number of buckets for a table in Fluss cluster. It's a cluster-level 
parameter and all the tables without specifying bucket number in the cluster 
will use the value as the bucket number.                                        
                     [...]
+| default.replication.factor                       | Integer    | 1            
                                                                                
                                                                         | The 
default replication factor for the log of a table in Fluss cluster. It's a 
cluster-level parameter, and all the tables without specifying replication 
factor in the cluster will use the value as replication factor.                 
                         [...]
+| remote.data.dir                                  | String     | (None)       
                                                                                
                                                                         | The 
directory used for storing the kv snapshot data files and remote log for log 
tiered storage in a Fluss supported filesystem.                                 
                                                                                
                  [...]
+| remote.fs.write-buffer-size                      | MemorySize | 4kb          
                                                                                
                                                                         | The 
default size of the write buffer for writing the local files to remote file 
systems.                                                                        
                                                                                
                   [...]
+| plugin.classloader.parent-first-patterns.default | String     | 
java.,<br/>com.alibaba.fluss.,<br/>javax.annotation.,<br/>org.slf4j,<br/>org.apache.log4j,<br/>org.apache.logging,<br/>org.apache.commons.logging,<br/>ch.qos.logback
 | A (semicolon-separated) list of patterns that specifies which classes should 
always be resolved through the plugin parent ClassLoader first. A pattern is a 
simple prefix that is checked against the fully qualified class name. This 
setting should generally no [...]
+| auto-partition.check.interval                    | Duration   | 10min        
                                                                                
                                                                         | The 
interval of auto partition check. The default value is 10 minutes.              
                                                                                
                                                                                
               [...]
+| max.partition.num                                | Integer    | 1000         
                                                                                
                                                                         | 
Limits the maximum number of partitions that can be created for a partitioned 
table to avoid creating too many partitions.                                    
                                                                                
                     [...]
+| max.bucket.num                                   | Integer    | 128000       
                                                                                
                                                                         | The 
maximum number of buckets that can be created for a table. The default value is 
128000.                                                                         
                                                                                
               [...]
+| acl.notification.expiration-time                 | Duration   | 15min        
                                                                                
                                                                         | The 
duration for which ACL notifications are valid before they expire. This 
configuration determines the time window during which an ACL notification is 
considered active. After this duration, the notification will no longer be 
valid and will be discarded. T [...]
+| authorizer.enabled                               | Boolean    | false        
                                                                                
                                                                         | 
Specifies whether to enable the authorization feature. If enabled, access 
control is enforced based on the authorization rules defined in the 
configuration. If disabled, all operations and resources are accessible to all 
users.                                [...]
+| authorizer.type                                  | String     | default      
                                                                                
                                                                         | 
Specifies the type of authorizer to be used for access control. This value 
corresponds to the identifier of the authorization plugin. The default value is 
`default`, which indicates the built-in authorizer implementation. Custom 
authorizers can be implemente [...]
+| super.users                                      | String     | (None)       
                                                                                
                                                                         | A 
semicolon-separated list of superusers who have unrestricted access to all 
operations and resources. Note that the delimiter is semicolon since SSL user 
names may contain comma, and each super user should be specified in the format 
`principal_type:principa [...]
 
 
 ## CoordinatorServer
@@ -57,8 +57,8 @@ during the Fluss cluster working.
 | Option                                     | Type       | Default         | 
Description                                                                     
                                                                                
                                                                                
                                                                                
                                                                                
               [...]
 
|--------------------------------------------|------------|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
 | tablet-server.id                           | Integer    | (None)          | 
The id for the tablet server.                                                   
                                                                                
                                                                                
                                                                                
                                                                                
               [...]
-| tablet-server.rack                         | String     | (None)          | 
The rack for the tabletServer. This will be used in rack aware bucket 
assignment for fault tolerance. Examples: `RACK1`, `cn-hangzhou-server10`       
                                                                                
                                                                                
                                                                                
                         [...]
-| data.dir                                   | String     | /tmp/fluss-data | 
This configuration controls the directory where fluss will store its data. The 
default value is /tmp/fluss-data                                                
                                                                                
                                                                                
                                                                                
                [...]
+| tablet-server.rack                         | String     | (None)          | 
The rack for the TabletServer. This will be used in rack aware bucket 
assignment for fault tolerance. Examples: `RACK1`, `cn-hangzhou-server10`       
                                                                                
                                                                                
                                                                                
                         [...]
+| data.dir                                   | String     | /tmp/fluss-data | 
This configuration controls the directory where Fluss will store its data. The 
default value is /tmp/fluss-data                                                
                                                                                
                                                                                
                                                                                
                [...]
 | server.writer-id.expiration-time           | Duration   | 7d              | 
The time that the tablet server will wait without receiving any write request 
from a client before expiring the related status. The default value is 7 days.  
                                                                                
                                                                                
                                                                                
                 [...]
 | server.writer-id.expiration-check-interval | Duration   | 10min           | 
The interval at which to remove writer ids that have expired due to 
`server.writer-id.expiration-time passing. The default value is 10 minutes.     
                                                                                
                                                                                
                                                                                
                           [...]
 | server.background.threads                  | Integer    | 10              | 
The number of threads to use for various background processing tasks. The 
default value is 10.                                                            
                                                                                
                                                                                
                                                                                
                     [...]
@@ -97,11 +97,11 @@ during the Fluss cluster working.
 
|------------------------------------------------|------------|----------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
 | log.segment.file-size                          | MemorySize | 1024m          
| This configuration controls the segment file size for the log. Retention and 
cleaning is always done a file at a time so a larger segment size means fewer 
files but less granular control over retention.                                 
                                                                                
                                                                                
                 [...]
 | log.index.file-size                            | MemorySize | 10m            
| This configuration controls the size of the index that maps offsets to file 
positions. We preallocate this index file and shrink it only after log rolls. 
You generally should not need to change this setting.                           
                                                                                
                                                                                
                  [...]
-| log.index.interval-size                        | MemorySize | 4k             
| This setting controls how frequently fluss adds an index entry to its offset 
index. The default setting ensures that we index a message roughly every 4096 
bytes. More indexing allows reads to jump closer to the exact position in the 
log but makes the index larger. You probably don't need to change this.         
                                                                                
                   [...]
+| log.index.interval-size                        | MemorySize | 4k             
| This setting controls how frequently Fluss adds an index entry to its offset 
index. The default setting ensures that we index a message roughly every 4096 
bytes. More indexing allows reads to jump closer to the exact position in the 
log but makes the index larger. You probably don't need to change this.         
                                                                                
                   [...]
 | log.file-preallocate                           | Boolean    | false          
| True if we should preallocate the file on disk when creating a new log 
segment.                                                                        
                                                                                
                                                                                
                                                                                
                     [...]
 | log.flush.interval-messages                    | Long       | Long.MAX_VALUE 
| This setting allows specifying an interval at which we will force a fsync of 
data written to the log. For example if this was set to 1, we would fsync after 
every message; if it were 5 we would fsync after every five messages.           
                                                                                
                                                                                
               [...]
 | log.replica.high-watermark.checkpoint-interval | Duration   | 5s             
| The frequency with which the high watermark is saved out to disk. The default 
setting is 5 seconds.                                                           
                                                                                
                                                                                
                                                                                
              [...]
-| log.replica.max-lag-time                       | Duration   | 30s            
| If a follower replica hasn't sent any fetch log requests or hasn't consumed 
up the leaders log end offset for at least this time, the leader will remove 
the follower replica form isr                                                   
                                                                                
                                                                                
                   [...]
+| log.replica.max-lag-time                       | Duration   | 30s            | If a follower replica hasn't sent any fetch log requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower replica from the ISR
                                                                                
                                                                                
                   [...]
 | log.replica.write-operation-purge-number       | Integer    | 1000           
| The purge number (in number of requests) of the write operation manager, the 
default value is 1000.                                                          
                                                                                
                                                                                
                                                                                
               [...]
 | log.replica.fetch-operation-purge-number       | Integer    | 1000           
| The purge number (in number of requests) of the fetch log operation manager, 
the default value is 1000.                                                      
                                                                                
                                                                                
                                                                                
               [...]
 | log.replica.fetcher-number                     | Integer    | 1              
| Number of fetcher threads used to replicate log records from each source 
tablet server. The total number of fetchers on each tablet server is bound by 
this parameter multiplied by the number of tablet servers in the cluster. 
Increasing this value can increase the degree of I/O parallelism in the 
follower and leader tablet server at the cost of higher CPU and memory 
utilization.                                [...]
@@ -163,7 +163,7 @@ during the Fluss cluster working.
 
 | Option          | Type | Default | Description                               
                                                                                
|
 
|-----------------|------|---------|---------------------------------------------------------------------------------------------------------------------------|
-| datalake.format | ENUM | (None)  | The datalake format used by of Fluss to 
be as lakehouse storage, such as Paimon, Iceberg, Hudi. Now, only support 
Paimon. |
+| datalake.format | Enum | (None)  | The datalake format used by Fluss as lakehouse storage, such as Paimon, Iceberg, or Hudi. Currently, only Paimon is supported. |
 
 ## Kafka
 
@@ -173,7 +173,7 @@ Kafka protocol compatibility is still in development.
 
 | Option                         | Type     | Default | Description            
                                                                                
            |
 
|--------------------------------|----------|---------|--------------------------------------------------------------------------------------------------------------------|
-| kafka.enabled                  | boolean  | false   | Whether enable fluss 
kafka. Disabled by default. When this option is set to true, the fluss kafka 
will be enabled. |
+| kafka.enabled                  | Boolean  | false   | Whether to enable the Fluss Kafka protocol. Disabled by default. When this option is set to true, the Fluss Kafka protocol will be enabled. |
 | kafka.listener.names           | String   | KAFKA   | The listener names for 
Kafka wire protocol communication. Support multiple listener names, separated 
by comma.     |
-| kafka.database                 | String   | kafka   | The database for fluss 
kafka. The default database is `kafka`.                                         
            |
+| kafka.database                 | String   | kafka   | The database for Fluss 
Kafka. The default database is `kafka`.                                         
            |
 | kafka.connection.max-idle-time | Duration | 60s     | Close kafka idle 
connections after the given time specified by this config.                      
                  |
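The sparse offset index described for `log.index.interval-size` can be illustrated with a short sketch: an index entry is recorded roughly every 4096 bytes appended, trading index size against seek precision. This is a simplified model with invented names, not Fluss's actual index code.

```python
# Simplified model of a sparse offset index: record one (offset, file
# position) entry each time roughly INDEX_INTERVAL bytes have been
# appended since the previous entry. Illustrative only -- not Fluss code.
INDEX_INTERVAL = 4096  # mirrors the log.index.interval-size default of 4k

def build_offset_index(message_sizes):
    """Return sparse (offset, byte_position) index entries for a log
    whose messages have the given sizes in bytes."""
    index = []
    position = 0           # byte position where the next message starts
    bytes_since_entry = 0  # bytes appended since the last index entry
    for offset, size in enumerate(message_sizes):
        if bytes_since_entry >= INDEX_INTERVAL:
            index.append((offset, position))
            bytes_since_entry = 0
        position += size
        bytes_since_entry += size
    return index

# 100 messages of 1 KiB each: one index entry about every 4 messages.
sparse = build_offset_index([1024] * 100)
print(len(sparse), sparse[0])
```

A larger interval yields fewer entries (a smaller index) but forces a read to scan further from the nearest indexed position, which is exactly the trade-off the option's description explains.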
diff --git a/website/docs/maintenance/observability/logging.md 
b/website/docs/maintenance/observability/logging.md
index 9ab8ba806..986f0fb73 100644
--- a/website/docs/maintenance/observability/logging.md
+++ b/website/docs/maintenance/observability/logging.md
@@ -13,7 +13,7 @@ By default, [Log4j 
2](https://logging.apache.org/log4j/2.x/index.html) is used a
 
 ## Configuring Log4j 2
 ### Log4j 2 property files
-The Fluss distribution ships with the following log4j properties files in the 
conf directory, which are used automatically if Log4j 2 is enabled:
+The Fluss distribution ships with the following Log4j properties files in the 
conf directory, which are used automatically if Log4j 2 is enabled:
 * `log4j-console.properties`: used for CoordinatorServer/TabletServer if they 
are run in the foreground (e.g., Kubernetes).
 * `log4j.properties`: used for CoordinatorServer/TabletServer by default.
 
@@ -43,12 +43,12 @@ To use Fluss with [Log4j 
1](https://logging.apache.org/log4j/1.2/) you must ensu
 For Fluss distributions this means you have to:
 * remove the `log4j-core`, `log4j-slf4j-impl` and `log4j-1.2-api` jars from 
the lib directory,
 * add the `log4j`, `slf4j-log4j12` and `log4j-to-slf4j` jars to the lib 
directory,
-* replace all log4j properties files in the conf directory with 
Log4j1-compliant versions.
+* replace all Log4j properties files in the conf directory with 
Log4j1-compliant versions.
 
 In the IDE this means you have to replace such dependencies defined in your 
pom, and possibly add exclusions on dependencies that transitively depend on 
them.
 
-## Configuring logback
-To use Fluss with [logback](https://logback.qos.ch/) you must ensure that:
+## Configuring Logback
+To use Fluss with [Logback](https://logback.qos.ch/) you must ensure that:
 * `org.apache.logging.log4j:log4j-slf4j-impl` is not on the classpath,
 * `ch.qos.logback:logback-core` and `ch.qos.logback:logback-classic` are on 
the classpath.
 
@@ -57,10 +57,10 @@ For Fluss distributions this means you have to:
 * add the `logback-core`, and `logback-classic` jars to the lib directory.
 
 :::info
-Fluss currently uses SLF4J 1.7.x, which is _incompatible_ with logback 1.3.0 
and higher.
+Fluss currently uses SLF4J 1.7.x, which is _incompatible_ with Logback 1.3.0 
and higher.
 :::
 
-The Fluss distribution ships with the following logback configuration files in 
the conf directory, which are used automatically if logback is enabled:
+The Fluss distribution ships with the following Logback configuration files in the conf directory, which are used automatically if Logback is enabled:
 * `logback-console.xml`: used for CoordinatorServer/TabletServer if they are 
run in the foreground (e.g., Kubernetes).
 * `logback.xml`: used for CoordinatorServer/TabletServer by default.
 
diff --git a/website/docs/maintenance/observability/monitor-metrics.md 
b/website/docs/maintenance/observability/monitor-metrics.md
index cadeb51bd..ecef62312 100644
--- a/website/docs/maintenance/observability/monitor-metrics.md
+++ b/website/docs/maintenance/observability/monitor-metrics.md
@@ -745,7 +745,7 @@ When using Flink to read and write, Fluss has implemented 
some key standard Flin
 to measure the source latency and output of sink, see [FLIP-33: Standardize 
Connector 
Metrics](https://cwiki.apache.org/confluence/display/FLINK/FLIP-33%3A+Standardize+Connector+Metrics).
 
 Flink source / sink metrics implemented are listed here.
 
-How to use flink metrics, you can see [flink 
metrics](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/ops/metrics/#system-metrics)
 for more details.
+For how to use Flink metrics, see [Flink Metrics](https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/ops/metrics/#system-metrics) for more details.
 
 #### Source Metrics
 
diff --git a/website/docs/maintenance/observability/quickstart.md 
b/website/docs/maintenance/observability/quickstart.md
index 4018a8831..f4a736f92 100644
--- a/website/docs/maintenance/observability/quickstart.md
+++ b/website/docs/maintenance/observability/quickstart.md
@@ -12,7 +12,7 @@ On this page, you can find the following guides to set up an 
observability stack
 
 ## Observability with Prometheus, Loki and Grafana
 
-We provide a minimal quickstart configuration for application observability 
with Prometheus (metric aggregation system), Loki (log aggregation sytem) and 
Grafana (dashboard system). 
+We provide a minimal quickstart configuration for application observability 
with Prometheus (metric aggregation system), Loki (log aggregation system) and 
Grafana (dashboard system). 
 
 The quickstart configuration comes with 2 metric dashboards.
 
@@ -69,7 +69,7 @@ Detailed configuration instructions for Fluss and Logback can 
be found [here](lo
 3. Additionally, you need to adapt the `docker-compose.yml` and 
 
 - add containers for Prometheus, Loki and Grafana and mount the corresponding 
configuration directories.
-- build and use the new Fluss image manifest 
(`fluss-sfl4j-logback.Dockerfile`).
+- build and use the new Fluss image manifest 
(`fluss-slf4j-logback.Dockerfile`).
 - configure Fluss to expose metrics via Prometheus.
 - add the desired application name that should be used when displaying logs in 
Grafana as environment variable (`APP_NAME`).
 - configure Flink to expose metrics via Prometheus.
diff --git a/website/docs/security/authentication.md 
b/website/docs/security/authentication.md
index 4713ad228..2f09d3134 100644
--- a/website/docs/security/authentication.md
+++ b/website/docs/security/authentication.md
@@ -26,7 +26,7 @@ No additional configuration is required for this mode.
 ## SASL
 This mechanism is based on SASL (Simple Authentication and Security Layer) 
authentication. Currently, only SASL/PLAIN is supported, which involves 
authentication using a username and password. It is recommended for production 
environments.
 
-### SASL server side configuration
+### SASL Server-Side Configuration
 | Option                                                         | Type   | 
Default Value | Description                                                     
                                                                      |
 
|----------------------------------------------------------------|--------|---------------|---------------------------------------------------------------------------------------------------------------------------------------|
 | security.sasl.enabled.mechanisms                               | List   | PLAIN         | Comma-separated list of enabled SASL mechanisms. Only PLAIN (authentication using a username and password) is supported now. |
@@ -52,7 +52,7 @@ security.sasl.plain.jaas.config: 
com.alibaba.fluss.security.auth.sasl.plain.Plai
 ```
 
 
-### SASL client side configuration
+### SASL Client-Side Configuration
 Clients must specify the appropriate security protocol and authentication 
mechanism when connecting to Fluss brokers.
 
 | Option                           | Type   | Default Value | Description      
                                                                                
                                                                                
                                                                                
                         |
@@ -86,10 +86,10 @@ Fluss supports custom authentication logic through its 
plugin architecture.
 Steps to implement a custom authenticator:
 1. **Implement AuthenticationPlugin Interfaces**: 
 Implement `ClientAuthenticationPlugin` for client-side logic and implement 
`ServerAuthenticationPlugin` for server-side logic.
-2.  **Server-side Plugin Installation**:
+2.  **Server-Side Plugin Installation**:
 Build the plugin as a standalone JAR and copy it to the Fluss server’s plugin 
directory: `<FLUSS_HOME>/plugins/<custom_auth_plugin>/`. The server will 
automatically load the plugin at startup.
-3.  **Client-side Plugin Packaging**  :
+3. **Client-Side Plugin Packaging**:
 To enable plugin functionality on the client side, include the plugin JAR in 
your application’s classpath. This allows the Fluss client to auto-discover the 
plugin during runtime.
-4. **Configure the desired protocol**:
+4. **Configure the Desired Protocol**:
   * `security.protocol.map` – for server-side listener authentication and use 
the `com.alibaba.fluss.security.auth.AuthenticationPlugin#authProtocol()` as 
protocol identifier.
   * `client.security.protocol` – for client-side authentication and use the 
`com.alibaba.fluss.security.auth.AuthenticationPlugin#authProtocol()` as 
protocol identifier
\ No newline at end of file
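Tying the client-side options together, a client configuration for SASL/PLAIN might look like the sketch below. Only `client.security.protocol` appears verbatim above; the mechanism and credential keys, and the credential values, are assumptions for illustration and should be checked against the client option table for your Fluss version.

```yaml
client.security.protocol: SASL         # documented above
client.security.sasl.mechanism: PLAIN  # assumed key name
client.security.sasl.username: alice   # illustrative credentials
client.security.sasl.password: alice-secret
```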
diff --git a/website/docs/security/authorization.md 
b/website/docs/security/authorization.md
index 7c4c3c341..67533eb21 100644
--- a/website/docs/security/authorization.md
+++ b/website/docs/security/authorization.md
@@ -77,7 +77,7 @@ Below is a summary of the currently public protocols and 
their relationship with
 | --- | --- | --- | --- |
 | CREATE_DATABASE | CREATE | Cluster | |
 | DROP_DATABASE | DELETE | Database | |
-| LIST_DATABASES | DESCIBE | Database | Only databases that the user has 
permission to access are returned. Databases for which the user lacks 
sufficient privileges are automatically filtered from the results.  |
+| LIST_DATABASES | DESCRIBE | Database | Only databases that the user has 
permission to access are returned. Databases for which the user lacks 
sufficient privileges are automatically filtered from the results.  |
 | CREATE_TABLE | CREATE | Database | |
 | DROP_TABLE | DELETE | Table | |
 | GET_TABLE_INFO | DESCRIBE | Table | |
@@ -195,9 +195,9 @@ CALL [catalog].sys.drop_acl(
 
 Fluss supports custom authorization logic through its plugin architecture.
 
-Steps to implement a custom authorization logic:
+Steps to Implement Custom Authorization Logic:
 1. **Implement `AuthorizationPlugin` Interfaces**.
-2.  **Server-side Plugin Installation**:
+2.  **Server-Side Plugin Installation**:
     Build the plugin as a standalone JAR and copy it to the Fluss server’s 
plugin directory: `<FLUSS_HOME>/plugins/<custom_auth_plugin>/`. The server will 
automatically load the plugin at startup.
-3. **Configure the desired protocol**: Set  
`com.alibaba.fluss.server.authorizer.AuthorizationPlugin.identifier` as the 
value of `authorizer.type` in the Fluss server configuration file.
+3. **Configure the Authorizer Type**: Set `com.alibaba.fluss.server.authorizer.AuthorizationPlugin.identifier` as the value of `authorizer.type` in the Fluss server configuration file.
 
diff --git a/website/docs/security/overview.md 
b/website/docs/security/overview.md
index 974f57623..aecbf87a5 100644
--- a/website/docs/security/overview.md
+++ b/website/docs/security/overview.md
@@ -79,18 +79,18 @@ For example:
 ### Fluss Principal – The Bridge Between Authentication and Authorization
 Once a client successfully authenticates, Fluss creates a **Fluss Principal**, 
which represents the authenticated identity. This  **Fluss Principal** is used 
throughout the system during authorization checks.
 
-The principal type indicates the category of the principal (e. g., "User", 
"Group", "Role"), while the name identifies the specific entity within that 
category. By default, the simple authorizer uses "User" as the principal type, 
but custom authorizers can extend this to support role-based or group-based 
access control lists (ACLs).
+The principal type indicates the category of the principal (e.g., "User", 
"Group", "Role"), while the name identifies the specific entity within that 
category. By default, the simple authorizer uses "User" as the principal type, 
but custom authorizers can extend this to support role-based or group-based 
access control lists (ACLs).
 Example usage:
 * `new FlussPrincipal("admin", "User")` – A standard user principal.
 * `new FlussPrincipal("admins", "Group")` – A group-based principal for 
authorization.
 
 ### Enable Authorization and Assign Super Users
-Fluss provides a pluggable authorization framework that uses Access Control 
Lists (ACLs) to determine whether a given FlussPrincipal is allowed to perform 
an operation on a specific resource.To enable authorization, you need to 
configure the following properties:
+Fluss provides a pluggable authorization framework that uses Access Control Lists (ACLs) to determine whether a given Fluss Principal is allowed to perform an operation on a specific resource. To enable authorization, you need to configure the following properties:
 | Option             | Type    | Default Value | Description                   
                                                                                
                                                                                
                                                                                
                                 |
 
|--------------------|---------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | authorizer.enabled | Boolean | false         | Specifies whether to enable 
the authorization feature.                                                      
                                                                                
                                                                                
                                   |
 | authorizer.type    | String  | default       | Specifies the type of 
authorizer to be used for access control. This value corresponds to the 
identifier of the authorization plugin. The default value is `default`, which 
indicates the built-in authorizer implementation. Custom authorizers can be 
implemented by providing a matching plugin identifier. |
-| super.users                                      | String     | (None)       
                                                                                
                                                                         | A 
semicolon-separated list of superusers who have unrestricted access to all 
operations and resources. Note that the delimiter is semicolon since SSL user 
names may contain comma, and each super user should be specified in the format 
`principal_type:principa [...]
+| super.users | String | (None) | A semicolon-separated list of super users who have unrestricted access to all operations and resources. Note that the delimiter is semicolon since SSL user names may contain comma, and each super user should be specified in the format `principal_type:princi [...]
 
 
 ## Security Workflow When Client Established a Connection
@@ -116,13 +116,13 @@ security.protocol.map: CLIENT:SASL, INTERNAL:PLAINTEXT
 
   If the credentials are valid and the authentication is successful, the client is allowed to proceed with further operations. Otherwise, the connection is rejected, and an authentication error is returned.
 
-4. Fluss Principal Is created
+4. Fluss Principal is created
 
    Upon successful authentication, Fluss creates a FlussPrincipal, 
representing the identity of the authenticated user
 
 5. Authorization checks occur
 
-   Before allowing any operation (like producing or consuming data), Fluss 
checks whether the FlussPrincipal has permission to perform that action based 
on configured ACL rules.
+   Before allowing any operation (like producing or consuming data), Fluss 
checks whether the Fluss Principal has permission to perform that action based 
on configured ACL rules.
 
 6. Response to the client
 
diff --git a/website/docs/table-design/data-types.md 
b/website/docs/table-design/data-types.md
index 0de09aaaa..4fa496aaa 100644
--- a/website/docs/table-design/data-types.md
+++ b/website/docs/table-design/data-types.md
@@ -7,24 +7,24 @@ sidebar_position: 10
 
 Fluss has a rich set of native data types available to users. All the data 
types of Fluss are as follows:
 
-| DataType                                                                 | 
Description                                                                     
                                                                                
                                                                                
                                                                                
                                                                                
                [...]
-|--------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 [...]
-| BOOLEAN                                                                  | A 
boolean with a (possibly) three-valued logic of TRUE, FALSE, UNKNOWN.           
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| TINYINT                                                                  | A 
1-byte signed integer with values from -128 to 127.                             
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| SMALLINT                                                                 | A 
2-byte signed integer with values from -32,768 to 32,767.                       
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| INT                                                                      | A 
4-byte signed integer with values from -2,147,483,648 to 2,147,483,647.         
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| BIGINT                                                                   | 
An 8-byte signed integer with values from -9,223,372,036,854,775,808 to 
9,223,372,036,854,775,807.                                                      
                                                                                
                                                                                
                                                                                
                        [...]
-| FLOAT                                                                    | A 
4-byte single precision floating point number.                                  
                                                                                
                                                                                
                                                                                
                                                                                
              [...]
-| DOUBLE                                                                   | 
An 8-byte double precision floating point number.                               
                                                                                
                                                                                
                                                                                
                                                                                
                [...]
-| CHAR(n) | A fixed-length character string where n is the number of code points. n must have a value between 1 and Integer.MAX_VALUE (both inclusive). |
-| STRING | A variable-length character string. |
-| DECIMAL(p, s) | A decimal number with fixed precision and scale where p is the number of digits in a number (=precision) and s is the number of digits to the right of the decimal point in a number (=scale). p must have a value between 1 and 38 (both inclusive). s must have a value between 0 and p (both inclusive). |
-| DATE | A date consisting of year-month-day with values ranging from 0000-01-01 to 9999-12-31. <br/>Compared to the SQL standard, the range starts at year 0000. |
-| TIME | A time WITHOUT time zone with no fractional seconds by default. <br/> An instance consists of `hour:minute:second` with up to second precision and values ranging from 00:00:00 to 23:59:59. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.LocalTime. A time WITH time zone is not provided. |
-| TIME(p) | A time WITHOUT time zone where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive).<br/> An instance consists of `hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 00:00:00.000000000 to 23:59:59.999999999. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics [...]
-| TIMESTAMP | A timestamp WITHOUT time zone with 6 digits of fractional seconds by default.<br/> An instance consists of `year-month-day hour:minute:second[.fractional]` with up to microsecond precision and values ranging from 0000-01-01 00:00:00.000000 to 9999-12-31 23:59:59.999999. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java. time. LocalDateTi [...]
-| TIMESTAMP(p) | A timestamp WITHOUT time zone where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive). <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 to 9999-12-31 23:59:59.999999999.<br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:5 [...]
-| TIMESTAMP_LTZ | A timestamp WITH time zone `TIMESTAMP WITH TIME ZONE` with 6 digits of fractional seconds by default. <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` zone with up to microsecond precision and values ranging from 0000-01-01 00:00:00.000000 +14:59 to 9999-12-31 23:59:59.999999 -14:59. <br/> Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the sem [...]
-| TIMESTAMP_LTZ(p) | A timestamp WITH time zone `TIMESTAMP WITH TIME ZONE` where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive). <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 to 9999-12-31 23:59:59.999999999. <br/> Compared to the SQL standard, leap [...]
-| BINARY(n) | A fixed-length binary string (=a sequence of bytes) where n is the number of bytes. n must have a value between 1 and Integer.MAX_VALUE (both inclusive). |
-| BYTES | A variable-length binary string (=a sequence of bytes). |
+| DataType | Description |
+|----------|-------------|
+| BOOLEAN | A boolean with a (possibly) three-valued logic of TRUE, FALSE, UNKNOWN. |
+| TINYINT | A 1-byte signed integer with values from -128 to 127. |
+| SMALLINT | A 2-byte signed integer with values from -32,768 to 32,767. |
+| INT | A 4-byte signed integer with values from -2,147,483,648 to 2,147,483,647. |
+| BIGINT | An 8-byte signed integer with values from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. |
+| FLOAT | A 4-byte single precision floating point number. |
+| DOUBLE | An 8-byte double precision floating point number. |
+| CHAR(n) | A fixed-length character string where n is the number of code points. n must have a value between 1 and Integer.MAX_VALUE (both inclusive). |
+| STRING | A variable-length character string. |
+| DECIMAL(p, s) | A decimal number with fixed precision and scale where p is the number of digits in a number (=precision) and s is the number of digits to the right of the decimal point in a number (=scale). p must have a value between 1 and 38 (both inclusive). s must have a value between 0 and p (both inclusive). |
+| DATE | A date consisting of year-month-day with values ranging from 0000-01-01 to 9999-12-31. <br/>Compared to the SQL standard, the range starts at year 0000. |
+| TIME | A time WITHOUT time zone with no fractional seconds by default. <br/> An instance consists of `hour:minute:second` with up to second precision and values ranging from 00:00:00 to 23:59:59. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.LocalTime. A time WITH time zone is not provided. |
+| TIME(p) | A time WITHOUT time zone where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive).<br/> An instance consists of `hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 00:00:00.000000000 to 23:59:59.999999999. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics [...]
+| TIMESTAMP | A timestamp WITHOUT time zone with 6 digits of fractional seconds by default.<br/> An instance consists of `year-month-day hour:minute:second[.fractional]` with up to microsecond precision and values ranging from 0000-01-01 00:00:00.000000 to 9999-12-31 23:59:59.999999. <br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the semantics are closer to java.time.LocalDateTime [...]
+| TIMESTAMP(p) | A timestamp WITHOUT time zone where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive). <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 to 9999-12-31 23:59:59.999999999.<br/>Compared to the SQL standard, leap seconds (23:59:60 and 23:5 [...]
+| TIMESTAMP_LTZ | A timestamp WITH time zone `TIMESTAMP WITH TIME ZONE` with 6 digits of fractional seconds by default. <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` zone with up to microsecond precision and values ranging from 0000-01-01 00:00:00.000000 +14:59 to 9999-12-31 23:59:59.999999 -14:59. <br/> Compared to the SQL standard, leap seconds (23:59:60 and 23:59:61) are not supported as the sem [...]
+| TIMESTAMP_LTZ(p) | A timestamp WITH time zone `TIMESTAMP WITH TIME ZONE` where p is the number of digits of fractional seconds (=precision). p must have a value between 0 and 9 (both inclusive). <br/>An instance consists of `year-month-day hour:minute:second[.fractional]` with up to nanosecond precision and values ranging from 0000-01-01 00:00:00.000000000 to 9999-12-31 23:59:59.999999999. <br/> Compared to the SQL standard, leap [...]
+| BINARY(n) | A fixed-length binary string (=a sequence of bytes) where n is the number of bytes. n must have a value between 1 and Integer.MAX_VALUE (both inclusive). |
+| BYTES | A variable-length binary string (=a sequence of bytes). |
