FlyingZC opened a new issue, #29186:
URL: https://github.com/apache/shardingsphere/issues/29186

   # Background
   Hi community.
   This issue is for #29149.
   
   The ShardingSphere SQL parser engine helps users parse SQL into an AST (Abstract Syntax Tree) and then visit the AST to obtain a SQLStatement (a Java object). Currently, we are planning to enhance the support for `SQL Server` SQL parsing in ShardingSphere.
   
   More details:
   
https://shardingsphere.apache.org/document/current/en/reference/sharding/parse/
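
   The parse-then-visit flow described above can be sketched roughly as below. This is illustrative pseudocode modeled on the parser usage shown in the linked documentation; the class names (`CacheOption`, `SQLParserEngine`, `ParseASTNode`, `SQLVisitorEngine`, `SQLStatement`) and constructor signatures may differ between ShardingSphere versions:

   ```java
   // Illustrative sketch only: parse SQL text into an AST, then visit the AST
   // to build a SQLStatement. API names follow the ShardingSphere parser docs
   // and may vary between releases.
   CacheOption cacheOption = new CacheOption(128, 1024L);              // AST cache settings
   SQLParserEngine parserEngine = new SQLParserEngine("SQLServer", cacheOption);
   ParseASTNode ast = parserEngine.parse(sql, false);                  // SQL text -> ANTLR AST
   SQLVisitorEngine visitorEngine = new SQLVisitorEngine("SQLServer", "STATEMENT", false, new Properties());
   SQLStatement statement = visitorEngine.visit(ast);                  // AST -> SQLStatement (Java object)
   ```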
   
   # Task
   This issue is to support parsing of more `SQL Server` SQL statements, as follows:
   ```sql
   SELECT DB_NAME(database_id) DbName,
          name LogName,
          physical_name,
          type_desc ,
          CONVERT(bigint, SIZE)*8/1024 LogFile_Size_MB ,
          CONVERT(bigint,max_size)*8/1024 LogFile_MaxSize_MB ,
          (SIZE*8.0/1024)/(max_size*8.0/1024)*100 percent_full_of_max_size,
          CASE WHEN growth = 0 THEN 'AUTOGROW_DISABLED' ELSE 'Autogrow_Enabled' END as AutoGrow
   FROM sys.master_files
   WHERE file_id = 2
       AND (SIZE*8.0/1024)/(max_size*8.0/1024)*100 > 90
       AND max_size not in (-1, 268435456)
       OR growth = 0
   
   if @@ROWCOUNT > 0
   BEGIN
       DECLARE @db_name_max_size sysname, @log_name_max_size sysname, @configured_max_log_boundary bigint, @auto_grow int
       
       DECLARE reached_max_size CURSOR FOR
           SELECT db_name(database_id),
                  name,
                  CONVERT(bigint, SIZE)*8/1024,
                  growth
           FROM sys.master_files
           WHERE file_id = 2
               AND ( (SIZE*8.0/1024)/(max_size*8.0/1024)*100 > 90
               AND max_size not in (-1, 268435456)
               OR growth = 0 )
   
   
       OPEN reached_max_size
   
       FETCH NEXT FROM reached_max_size into @db_name_max_size , @log_name_max_size, @configured_max_log_boundary, @auto_grow
   
       WHILE @@FETCH_STATUS = 0
       BEGIN
           IF @auto_grow = 0
             BEGIN
               SELECT 'The database "' + @db_name_max_size+'" contains a log file "' + @log_name_max_size + '" whose autogrow has been DISABLED' as Finding
               SELECT 'Consider enabling autogrow or increasing file size via these ALTER DATABASE commands' as Recommendation
               SELECT 'ALTER DATABASE ' + @db_name_max_size + ' MODIFY FILE ( NAME = N''' + @log_name_max_size + ''', FILEGROWTH = 65536KB)' as AutoGrowth
             END
           ELSE
             BEGIN
               SELECT 'The database "' + @db_name_max_size+'" contains a log file "' + @log_name_max_size + '" whose max limit is set to ' + convert(varchar(24), @configured_max_log_boundary) + ' MB and this limit has been reached!' as Finding
               SELECT 'Consider using one of the below ALTER DATABASE commands to either change the log file size or add a new file' as Recommendation
             END
           
           SELECT 'ALTER DATABASE ' + @db_name_max_size + ' MODIFY FILE ( NAME = N''' + @log_name_max_size + ''', MAXSIZE = UNLIMITED)' as UnlimitedSize
           SELECT 'ALTER DATABASE ' + @db_name_max_size + ' MODIFY FILE ( NAME = N''' + @log_name_max_size + ''', MAXSIZE = something_larger_than_' + CONVERT(varchar(24), @configured_max_log_boundary) +'MB )' as IncreasedSize
           SELECT 'ALTER DATABASE ' + @db_name_max_size + ' ADD LOG FILE ( NAME = N''' + @log_name_max_size + '_new'', FILENAME = N''SOME_FOLDER_LOCATION\' + @log_name_max_size + '_NEW.LDF'', SIZE = 81920KB , FILEGROWTH = 65536KB )' as AddNewFile
   
           FETCH NEXT FROM reached_max_size into @db_name_max_size , @log_name_max_size, @configured_max_log_boundary, @auto_grow
   
       END
   
       CLOSE reached_max_size
       DEALLOCATE reached_max_size
   END
   ELSE
       SELECT 'Found no files that have reached max log file size' as Findings
   ```
   
[link](https://learn.microsoft.com/zh-cn/sql/relational-databases/logs/troubleshoot-a-full-transaction-log-sql-server-error-9002?view=sql-server-ver16)
   
   ```sql
   UPDATE MyTable SET NullCol = N'some_value' WHERE NullCol IS NULL
   ```
   
[link](https://learn.microsoft.com/zh-cn/sql/t-sql/statements/alter-table-transact-sql?view=sql-server-ver16)
   
   ```sql
   SELECT <published_columns> FROM [schema].[tablename] WHERE
   ```
   
[link](https://learn.microsoft.com/zh-cn/sql/relational-databases/replication/publish/define-and-modify-a-static-row-filter?view=sql-server-ver16)
   
   ```sql
   SELECT <published_columns> FROM [HumanResources].[Employee] WHERE [LoginID] = 'adventure-works\ranjit0'
   ```
   
[link](https://learn.microsoft.com/zh-cn/sql/relational-databases/replication/publish/define-and-modify-a-static-row-filter?view=sql-server-ver16)
   
   ```sql
   SELECT <published_columns> FROM [Sales].[SalesOrderHeader] WHERE MyFunction([Freight]) > 100
   ```
   
[link](https://learn.microsoft.com/zh-cn/sql/relational-databases/replication/publish/define-and-modify-a-static-row-filter?view=sql-server-ver16)
   
   # Process
   1. First confirm that this is correct `SQL Server` SQL syntax; if not, please leave a message under the issue and ignore it;
   2. Compare the SQL definitions in the Official SQL Doc and the ShardingSphere SQL Doc;
   3. If there is any difference in the ShardingSphere SQL Doc, please correct it by referring to the Official SQL Doc;
   4. Run `mvn install` on the current file's module;
   5. Check whether there are any exceptions; if there are, please fix them (especially xxxVisitor.class);
   6. Add the new corresponding SQL case in SQL Cases and the expected parsed result in Expected Statement XML;
   7. Run SQLParserParameterizedTest to make sure there are no exceptions.
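
   As a sketch of step 6, a new case is typically registered in the SQL cases XML and paired with an entry in the expected statement XML keyed by the same id. The `id`, `value`, and segment details below are placeholders; the attribute layout follows the existing case files in the repository:

   ```xml
   <!-- Hypothetical entry in the SQL cases file; id and value are placeholders. -->
   <sql-case id="select_sqlserver_log_file_size" value="SELECT DB_NAME(database_id) DbName FROM sys.master_files WHERE file_id = 2" db-types="SQLServer" />

   <!-- Matching skeleton in the Expected Statement XML, keyed by the same id. -->
   <select sql-case-id="select_sqlserver_log_file_size">
       <!-- expected projection, from, and where segments go here -->
   </select>
   ```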
   
   # Relevant Skills
   1. Master the Java language
   2. Have a basic understanding of Antlr `g4` files
   3. Be familiar with `SQL Server` SQL

