Whojohn opened a new issue, #6869:
URL: https://github.com/apache/kyuubi/issues/6869

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Describe the bug
   
   # Which cases I tried
   ## Problem in the beeline situation
   
   ```
   ./bin/beeline --incremental=true --verbose=true --incrementalBufferRows=1 --outputformat=table -u 'jdbc:hive2://dev:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=kyuubi;#kyuubi.engine.type=FLINK_SQL'
   
   select sum(a),count(b) from sou
   +--------------+---------+------------+
   |    EXPR$0    | EXPR$1  | op_inside  |
   +--------------+---------+------------+
   | -1861035379  | 1       | +I         |
   | -1861035379  | 1       | -U         |
   | -317406356  | 2       | +U         |
   | -317406356  | 2       | -U         |
   | -684766290  | 3       | +U         |
   | -684766290  | 3       | -U         |
   | -15267534  | 4       | +U         |
   | -15267534  | 4       | -U         |
   | 2073227048  | 5       | +U         |
   | 2073227048  | 5       | -U         |
   | -734011806  | 6       | +U         |
   | -734011806  | 6       | -U         |
   | 786777747  | 7       | +U         |
   | 786777747  | 7       | -U         |
   | -707027945  | 8       | +U         |
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   Interrupting... Please be patient this may take some time.
   | -707027945  | 8       | -U         |
   | -681694912  | 9       | +U         |
   | -681694912  | 9       | -U         |
   | 950626509  | 10      | +U         |
   | 950626509  | 10      | -U         |
   | -1337848862  | 11      | +U         |
   | -1337848862  | 11      | -U         |
   | 2077867031  | 12      | +U         |
   | 2077867031  | 12      | -U         |
   | 1993739972  | 13      | +U         |
   | 1993739972  | 13      | -U         |
   | 1204146208  | 14      | +U         |
   | 1204146208  | 14      | -U         |
   
   ------------- no matter how many Ctrl-C signals are sent, it will not stop until kyuubi.session.engine.flink.max.rows is reached. -------------
   
   ```
   
   
   ## Problem with the KyuubiHiveDriver JDBC client
   
   - My code is as follows:
   ``` 
   connection = DriverManager.getConnection(
           "jdbc:hive2://localhost:10009/#kyuubi.engine.type=FLINK_SQL", null, null);
   statement = connection.createStatement();
   statement.setFetchSize(1);
   statement.execute("create table sou(a int, b string) with "
           + "('connector' = 'datagen', 'rows-per-second' = '20')");
   resultSet = statement.executeQuery("select sum(a), count(a) from sou");
   int a = 0;
   while (resultSet.next()) {
       a += 1;
       if (a > 100) {
           // !!! notice: statement.cancel() does NOT stop the Flink job
           statement.cancel();
           break;
       }
   }
   // do more querying
   ```
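   As a client-side mitigation while cancel is broken, one option is to arm the cancel from a watchdog thread and follow it with close(), since closing the statement may release the server-side operation handle even when cancel() is ignored. A minimal sketch of that pattern; `Cancellable` is a hypothetical stand-in for java.sql.Statement so the sketch runs without a live Kyuubi server, and whether close() actually stops the Flink job in this bug is untested:

   ```java
   import java.util.concurrent.Executors;
   import java.util.concurrent.ScheduledExecutorService;
   import java.util.concurrent.ScheduledFuture;
   import java.util.concurrent.TimeUnit;

   // Hypothetical watchdog pattern: after a timeout, cancel() the running
   // statement and then close() it as a fallback. `Cancellable` stands in
   // for java.sql.Statement so this compiles and runs without a server.
   public class CancelWatchdog {

       public interface Cancellable {
           void cancel();
           void close();
       }

       // Arm the watchdog: after timeoutMs, send cancel, then close as a
       // fallback in case cancel is ignored server-side (the reported bug).
       public static ScheduledFuture<?> arm(Cancellable stmt, long timeoutMs,
                                            ScheduledExecutorService scheduler) {
           return scheduler.schedule(() -> {
               stmt.cancel();
               stmt.close();
           }, timeoutMs, TimeUnit.MILLISECONDS);
       }

       // Block until the watchdog has fired (helper for driving the sketch).
       public static void await(ScheduledFuture<?> f) {
           try {
               f.get();
           } catch (Exception e) {
               throw new RuntimeException(e);
           }
       }
   }
   ```

   With a real statement, `arm(stmt::cancel-and-close, ...)` would wrap the actual JDBC object; the point is only that cancel and close are issued off the fetch loop's thread.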
   - Flink SQL cluster view (no matter how many cancels are sent, the job keeps running until another query is submitted):
   Flink SQL> show jobs;
   +----------------------------------+----------+----------+-------------------------+
   |                           job id | job name |   status |              start time |
   +----------------------------------+----------+----------+-------------------------+
   | a91f330664aa725067ff641cd3fd6651 |  collect | CANCELED | 2024-12-26T06:00:33.033 |
   | f7a5538b8e510ccafa47830f982ed724 |  collect | CANCELED | 2024-12-26T03:39:15.653 |
   | 85dca8c9ab2068f1d48f63add195882c |  collect |  RUNNING | 2024-12-26T06:01:18.028 |
   +----------------------------------+----------+----------+-------------------------+
   3 rows in set
   
   
   ## Source code I have read
   1. In the server/engine/Flink logs I cannot find any entry about a cancel being received.
   2. Debugging in beeline and JDBC, the client sends KyuubiStatement#cancel correctly, but on the remote side AbstractBackendService#cancelOperation never receives the call or signals the operation (verified by debugging in the engine).
   > Both beeline and JDBC call KyuubiStatement#cancel to cancel (guessed from the code).
   
   PS: if anybody can show me how to debug the Thrift cancel RPC, it would be my pleasure to submit a PR. Forgive my poor English.
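
   One way to see whether the CancelOperation RPC reaches the server is to raise the log level for the relevant packages. A minimal sketch, assuming the default log4j2 XML setup shipped with Kyuubi; the logger names below are guesses based on the package of AbstractBackendService and should be adjusted to the classes being debugged:

   ```xml
   <!-- Hypothetical log4j2.xml loggers; package names are assumptions. -->
   <Logger name="org.apache.kyuubi.service" level="debug" additivity="false">
     <AppenderRef ref="stdout"/>
   </Logger>
   <Logger name="org.apache.kyuubi.operation" level="debug" additivity="false">
     <AppenderRef ref="stdout"/>
   </Logger>
   ```

   If the cancel never shows up at DEBUG level on the server or engine, that would support the theory that the RPC is lost before reaching AbstractBackendService#cancelOperation.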
   
   ### Affects Version(s)
   
   1.9.3
   
   ### Kyuubi Server Log Output
   
   _No response_
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   ```yaml
   export FLINK_HOME=/data/flink-1.17.2
   export FLINK_HADOOP_CLASSPATH=/data/flink-1.17.2/lib/flink-shaded-hadoop-2-uber-2.10.2-10.0.jar
   export JAVA_HOME="/data/jdk-11.0.25+9"
   ```
   
   
   ### Kyuubi Engine Configurations
   
   ```yaml
   kyuubi.server.thrift.resultset.default.fetch.size    1
   kyuubi.engine.jdbc.fetch.size                        1
   hive.server2.thrift.resultset.default.fetch.size     1
   kyuubi.operation.interrupt.on.cancel                 true
   ```
   
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes. I would be willing to submit a PR with guidance from the Kyuubi 
community to fix.
   - [X] No. I cannot submit a PR at this time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

