dongoewang opened a new issue, #30429: URL: https://github.com/apache/shardingsphere/issues/30429
## Bug Report

**For English only**, other languages will not be accepted.

Before reporting a bug, make sure you have:

- Searched open and closed [GitHub issues](https://github.com/apache/shardingsphere/issues).
- Read the documentation: [ShardingSphere Doc](https://shardingsphere.apache.org/document/current/en/overview).

Please pay attention to the issues you submit, because we may need more details. If there is no further response and we cannot reproduce the problem from the current information, we will **close it**.

Please answer these questions before submitting your issue. Thanks!

### Which version of ShardingSphere did you use?

5.4.0

### Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?

ShardingSphere-JDBC

### Expected behavior

The value of the key column is set by `SnowflakeKeyGenerateAlgorithm`.

### Actual behavior

The value of the key column is null.

### Reason analyze (If you can)

### Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when exception occur etc.

### Example codes for reproduce this issue (such as a github link).
Config file:

```yaml
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: sharding-data-source
      server-lists: ${zookeeper.ip}:2181
dataSources:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    dataSourceClassName: com.alibaba.druid.pool.DruidDataSource
    url: ${store.datasource.ip}:3306/${store.datasource.oa}?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8&allowMultiQueries=true&rewriteBatchedStatements=true&allowPublicKeyRetrieval=true
    username: ${store.datasource.username}
    password: ${store.datasource.password}
    driverClassName: com.mysql.cj.jdbc.Driver
    # initial, minimum, and maximum connection counts
    initialSize: 3
    minIdle: 3
    maxActive: 18
    # timeout (ms) when waiting for a database connection
    maxWait: 60000
    # interval (ms) between checks for idle connections to close
    timeBetweenEvictionRunsMillis: 60000
    validationQuery: SELECT 1 FROM dual
rules:
- !SINGLE
  tables:
    - "*.*"
- !SHARDING
  tables:
    kq_attendance_daily:
      actualDataNodes: datasource.kq_attendance_daily_${202401..208801}
      tableStrategy:
        standard:
          shardingColumn: attendance_date
          shardingAlgorithmName: clockRecordAutoCustom
      keyGenerateStrategy:
        column: id
        keyGeneratorName: snowflake_generator
    kq_attendance_daily_clock:
      actualDataNodes: datasource.kq_attendance_daily_clock_${202401..208801}
      tableStrategy:
        standard:
          shardingColumn: attendance_date
          shardingAlgorithmName: clockRecordAutoCustom
      keyGenerateStrategy:
        column: id
        keyGeneratorName: snowflake_generator
  bindingTables:
    - kq_attendance_daily, kq_attendance_daily_clock
  shardingAlgorithms:
    clockRecordAutoCustom:
      type: CLASS_BASED
      props:
        strategy: standard
        algorithmClassName: com.younike.oa.shardingAlgorithm.ClockRecordAutoShardingAlgorithm
    # kept for later use, for complex sharding
    # attendanceDailyDetailAutoCustom:
    #   type: CLASS_BASED
    #   props:
    #     strategy: complex
    #     algorithmClassName: com.younike.oa.shardingAlgorithm.AttendanceDailyDetailComplexKeysAlgorithm
  keyGenerators:
    snowflake_generator:
      type: SNOWFLAKE
props:
  sql-show: true
```

Java config:

```java
@Bean(name = "shardingDataSource")
public DataSource shardingDataSource() {
    DriverDataSourceCache dataSourceCache = new DriverDataSourceCache();
    DataSource dataSource = dataSourceCache.get("jdbc:shardingsphere:classpath:sharding-config.yaml");
    return dataSource;
}
```

Table `kq_attendance_daily` (`kq_attendance_daily_clock` is similar):

```sql
CREATE TABLE `kq_attendance_daily` (
  `id` bigint NOT NULL AUTO_INCREMENT COMMENT 'id',
  `organ_id` bigint NULL DEFAULT NULL COMMENT 'project id',
  `employee_id` bigint NOT NULL COMMENT 'employee id',
  `dept_id` bigint NULL DEFAULT NULL COMMENT 'department id',
  `post_id` bigint NULL DEFAULT NULL COMMENT 'post id',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 1553387 CHARACTER SET = utf8mb3 COLLATE = utf8mb3_general_ci COMMENT = 'employee daily attendance report' ROW_FORMAT = DYNAMIC;
```
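Editor's note, not part of the original report: for context on what the `SNOWFLAKE` key generator is expected to produce, here is a minimal, self-contained sketch of the classic snowflake bit layout (41-bit timestamp offset, 10-bit worker id, 12-bit sequence). The epoch value and class name below are illustrative assumptions; ShardingSphere's actual `SnowflakeKeyGenerateAlgorithm` may use different constants.

```java
// Illustrative sketch of a snowflake-style ID: a 63-bit positive long built as
// (timestamp offset << 22) | (worker id << 12) | sequence.
// EPOCH is an arbitrary custom epoch chosen for this sketch, not ShardingSphere's.
public class SnowflakeSketch {
    public static final long EPOCH = 1477929600000L; // assumption: arbitrary custom epoch (ms)

    // Pack a millisecond timestamp, worker id (0..1023), and sequence (0..4095) into one id.
    public static long compose(long timestampMillis, long workerId, long sequence) {
        return ((timestampMillis - EPOCH) << 22) | (workerId << 12) | sequence;
    }

    // Recover the original millisecond timestamp from an id.
    public static long timestampOf(long id) {
        return (id >>> 22) + EPOCH;
    }

    public static void main(String[] args) {
        long id = compose(EPOCH + 1000L, 1L, 7L);
        System.out.println(id);                  // (1000 << 22) | (1 << 12) | 7
        System.out.println(timestampOf(id) - EPOCH); // 1000
    }
}
```

Because ids are monotonically increasing in time, a non-null generated key should always be a large positive `bigint`; a `null` value means the generator was never invoked for the insert.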
