[jira] [Assigned] (FLINK-30219) Fetch results api in sql gateway return error result.

2022-11-26 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reassigned FLINK-30219:
--

Assignee: Aiden Gong

> Fetch results api in sql gateway return error result.
> -
>
> Key: FLINK-30219
> URL: https://issues.apache.org/jira/browse/FLINK-30219
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: Aiden Gong
>Assignee: Aiden Gong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.1
>
> Attachments: image-2022-11-26-10-38-02-270.png
>
>
> !image-2022-11-26-10-38-02-270.png!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [flink] wanglijie95 commented on pull request #20097: [FLINK-28284][Connectors/Jdbc] Add JdbcSink with new format

2022-11-26 Thread GitBox


wanglijie95 commented on PR #20097:
URL: https://github.com/apache/flink/pull/20097#issuecomment-1328158004

   Hi @eskabetxe, thanks for your reply.
   1. I think the purpose of providing three sink interfaces (Sink, StatefulSink, TwoPhaseCommittingSink) is to let developers/users inherit different ones according to their needs, so I prefer the two-sinks approach, especially since we are likely to introduce only the non-XA sink in this PR.
   2. I don't see an actual need to introduce the JdbcProducer abstraction at present; maybe it's better to do it later when it is really needed (we can decide which interfaces/methods need to be abstracted at that time). Even if the JdbcProducer is introduced, I think the NonXaJdbcProducer should reuse the JdbcOutputFormat internally, which will make it easier to fix bugs or add feature options later (we don't need to modify two places).
   3. In view of the current blocker (the RuntimeContext) for the XA sink migration, I think the JDBC sink migration can be split into two subtasks (XA and non-XA), and we only do the non-XA sink in this ticket/PR.
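
   For readers following along, the interface split discussed above can be sketched with a simplified, self-contained hierarchy. The names mirror Flink's `Sink`, `StatefulSink`, and `TwoPhaseCommittingSink`, but the method signatures here are illustrative stand-ins, not the real API:

   ```java
   import java.util.List;

   // Simplified sketch of the three-way sink interface split (illustrative, not Flink's API).
   interface Sink<T> {
       void write(T element) throws Exception;      // plain, stateless sink
   }

   interface StatefulSink<T, S> extends Sink<T> {
       List<S> snapshotState(long checkpointId);    // sinks that checkpoint writer state
   }

   interface TwoPhaseCommittingSink<T, C> extends Sink<T> {
       List<C> prepareCommit();                     // transactional sinks (e.g. XA)
       void commit(List<C> committables);
   }

   // A non-XA JDBC sink only needs the stateful flavor; an XA sink would additionally
   // implement TwoPhaseCommittingSink -- hence the "two sinks" approach.
   class NonXaJdbcSink implements StatefulSink<String, Integer> {
       @Override public void write(String element) { /* buffer and flush, e.g. via JdbcOutputFormat */ }
       @Override public List<Integer> snapshotState(long checkpointId) { return List.of(); }
   }
   ```

   Implementors then pick the narrowest interface that matches their delivery guarantees.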
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [flink] klion26 closed pull request #20903: [FLINK-29402][state backends] Add RocksDB options for DirectIO reads …

2022-11-26 Thread GitBox


klion26 closed pull request #20903: [FLINK-29402][state backends] Add RocksDB 
options for DirectIO reads …
URL: https://github.com/apache/flink/pull/20903





[GitHub] [flink] Myasuka commented on a diff in pull request #21362: [FLINK-29430] Add sanity check when setCurrentKeyGroupIndex

2022-11-26 Thread GitBox


Myasuka commented on code in PR #21362:
URL: https://github.com/apache/flink/pull/21362#discussion_r1032801916


##
flink-runtime/src/main/java/org/apache/flink/runtime/state/heap/InternalKeyContextImpl.java:
##
@@ -72,6 +73,10 @@ public void setCurrentKey(@Nonnull K currentKey) {
 
     @Override
     public void setCurrentKeyGroupIndex(int currentKeyGroupIndex) {
+        if (!keyGroupRange.contains(currentKeyGroupIndex)) {
+            throw KeyGroupRangeOffsets.newIllegalKeyGroupException(
+                    currentKeyGroupIndex, keyGroupRange);
+        }

Review Comment:
   BTW, apart from changing the base class, we could also change only `RocksDBKeyedStateBackend#setCurrentKey`, avoiding any impact on the HashMapStateBackend, since the HashMapStateBackend always checks the key group when accessing state tables.
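
   A minimal sketch of such a fail-fast check, using a simplified stand-in for Flink's `KeyGroupRange` (the real check lives in `InternalKeyContextImpl` and throws via `KeyGroupRangeOffsets.newIllegalKeyGroupException`):

   ```java
   // Minimal stand-in for Flink's KeyGroupRange: an inclusive [start, end] range.
   class KeyGroupRange {
       final int start, end;
       KeyGroupRange(int start, int end) { this.start = start; this.end = end; }
       boolean contains(int keyGroupIndex) { return keyGroupIndex >= start && keyGroupIndex <= end; }
   }

   class KeyContext {
       private final KeyGroupRange keyGroupRange;
       private int currentKeyGroupIndex;

       KeyContext(KeyGroupRange range) { this.keyGroupRange = range; }

       // Sanity check as proposed in the diff: fail fast on an out-of-range key
       // group instead of silently writing state to the wrong key group.
       void setCurrentKeyGroupIndex(int keyGroupIndex) {
           if (!keyGroupRange.contains(keyGroupIndex)) {
               throw new IllegalArgumentException(
                       "Key group " + keyGroupIndex + " is not in ["
                               + keyGroupRange.start + ", " + keyGroupRange.end + "].");
           }
           this.currentKeyGroupIndex = keyGroupIndex;
       }

       int getCurrentKeyGroupIndex() { return currentKeyGroupIndex; }
   }
   ```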
   






[jira] [Commented] (FLINK-30158) [Flink SQL][Protobuf] NullPointerException when querying Kafka topic using repeated or map attributes

2022-11-26 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-30158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17639340#comment-17639340
 ] 

Benchao Li commented on FLINK-30158:


[~jamesmcguirepro] Currently the Protobuf format requires users to provide the schema (the Flink schema) and to make sure it is aligned with the actual (protobuf) schema. Could you provide us the full DDL you used in this case?

> [Flink SQL][Protobuf] NullPointerException when querying Kafka topic using 
> repeated or map attributes
> -
>
> Key: FLINK-30158
> URL: https://issues.apache.org/jira/browse/FLINK-30158
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.16.0
>Reporter: James Mcguire
>Priority: Major
>
> I am encountering a {{java.lang.NullPointerException}} when trying to use 
> Flink SQL to query a Kafka topic that uses either {{repeated}} and/or 
> {{map}} attributes.
>  
> *Replication steps*
>  # Use a protobuf definition that uses repeated and/or map. This 
> protobuf schema should cover a few of the problematic scenarios I ran into:
>  
> {code:java}
> syntax = "proto3";
> package example.message;
> option java_package = "com.example.message";
> option java_multiple_files = true;
> message NestedType {
>   int64 nested_first = 1;
>   oneof nested_second {
> int64 one_of_first = 2;
> string one_of_second = 3;
>   }
> }
> message Test {
>   repeated int64 first = 1;
>   map second = 2;
> } {code}
> 2. Attempt query on topic, even excluding problematic columns:
>  
> {code:java}
> [ERROR] Could not execute SQL statement. Reason:
> org.apache.flink.formats.protobuf.PbCodegenException: 
> java.lang.NullPointerException{code}
>  
>  
> log file:
>  
> {code:java}
> 2022-11-22 15:33:59,510 WARN  org.apache.flink.table.client.cli.CliClient [] - Could not execute SQL statement.
> org.apache.flink.table.client.gateway.SqlExecutionException: Error while retrieving result.
>     at org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:79) ~[flink-sql-client-1.16.0.jar:1.16.0]
> Caused by: java.lang.RuntimeException: Failed to fetch next result
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222) ~[?:?]
>     at org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:75) ~[flink-sql-client-1.16.0.jar:1.16.0]
> Caused by: java.io.IOException: Failed to fetch job execution result
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:184) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:106) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.table.planner.connectors.CollectDynamicSink$CloseableRowIteratorWrapper.hasNext(CollectDynamicSink.java:222) ~[?:?]
>     at org.apache.flink.table.client.gateway.local.result.CollectResultBase$ResultRetrievalThread.run(CollectResultBase.java:75) ~[flink-sql-client-1.16.0.jar:1.16.0]
> Caused by: java.util.concurrent.ExecutionException: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: bc869097009a92d0601add881a6b920c)
>     at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) ~[?:?]
>     at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2022) ~[?:?]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.getAccumulatorResults(CollectResultFetcher.java:182) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:121) ~[flink-dist-1.16.0.jar:1.16.0]
>     at org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(Collect

[GitHub] [flink] 1996fanrui commented on a diff in pull request #20137: Just for CI

2022-11-26 Thread GitBox


1996fanrui commented on code in PR #20137:
URL: https://github.com/apache/flink/pull/20137#discussion_r1032795523


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequestExecutorImpl.java:
##
@@ -125,11 +162,72 @@ public void start() throws IllegalStateException {
 
 @Override
 public void submit(ChannelStateWriteRequest request) throws Exception {
-    submitInternal(request, () -> deque.add(request));
+    BlockingQueue<ChannelStateWriteRequest> unreadyQueue =
+            unreadyQueues.get(
+                    SubtaskID.of(
+                            request.getJobID(),
+                            request.getJobVertexID(),
+                            request.getSubtaskIndex()));
+    checkState(unreadyQueue != null, "The subtask %s is not yet registered");
+    submitInternal(
+            request,
+            () -> {
+                synchronized (unreadyQueue) {
+                    // 1. unreadyQueue isn't empty: the new request must keep the order,
+                    // so add the new request to the unreadyQueue tail.
+                    if (!unreadyQueue.isEmpty()) {
+                        unreadyQueue.add(request);
+                        return;
+                    }
+                    // 2. unreadyQueue is empty and the new request is ready,
+                    // so add it to the readyQueue.
+                    if (request.getReadyFuture().isDone()) {
+                        deque.add(request);
+                        return;
+                    }
+                    // 3. unreadyQueue is empty and the new request isn't ready,
+                    // so add it to the unreadyQueue and register it as the first request.
+                    unreadyQueue.add(request);
+                    registerFirstRequestFuture(request, unreadyQueue);
+                }
+            });
+}
+
+private void registerFirstRequestFuture(
+        @Nonnull ChannelStateWriteRequest firstRequest,
+        BlockingQueue<ChannelStateWriteRequest> unreadyQueue) {
+    assert Thread.holdsLock(unreadyQueue);
+    checkState(firstRequest == unreadyQueue.peek(), "The request isn't the first request.");
+    firstRequest
+            .getReadyFuture()
+            .thenAccept(o -> moveReadyRequestToReadyQueue(unreadyQueue, firstRequest))
+            .exceptionally(
+                    throwable -> {
+                        moveReadyRequestToReadyQueue(unreadyQueue, firstRequest);
+                        return null;
+                    });

Review Comment:
   When dataFuture is completed, just move the request to the readyQueue.
   
   And if the dataFuture is completed exceptionally, `writer.fail` will be called later (you can take a look at `ChannelStateWriteRequest#buildFutureWriteRequest`), so I don't think the exception needs to be handled here.
   
   ```
   static ChannelStateWriteRequest buildFutureWriteRequest(
           JobID jobID,
           JobVertexID jobVertexID,
           int subtaskIndex,
           long checkpointId,
           String name,
           CompletableFuture<List<Buffer>> dataFuture,
           BiConsumer<ChannelStateWriter, Buffer> bufferConsumer) {
       return new CheckpointInProgressRequest(
               name,
               jobID,
               jobVertexID,
               subtaskIndex,
               checkpointId,
               writer -> {
                   checkState(
                           dataFuture.isDone(), "It should be executed when dataFuture is done.");
                   List<Buffer> buffers;
                   try {
                       buffers = dataFuture.get();
                   } catch (ExecutionException e) {
                       // If dataFuture fails, fail only the single related writer
                       writer.fail(jobID, jobVertexID, subtaskIndex, e);
                       return;
                   }
                   for (Buffer buffer : buffers) {
                       checkBufferIsBuffer(buffer);
                       bufferConsumer.accept(writer, buffer);
                   }
               },
               throwable ->
                       dataFuture.thenAccept(
                               buffers -> {
                                   try {
                                       CloseableIterator.fromList(buffers, Buffer::recycleBuffer)
                                               .close();
                                   } catch (Exception e) {
                                       LOG.error(
                                               "Failed to recycle the output buffer of channel state.",
                                               e);
                                   }
                               }),
               dataFuture);
   }
   ```
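
   The "just move it on completion" idea can be sketched with plain `CompletableFuture`s. The class and queue names below are illustrative stand-ins for the executor's real ready/unready queues, showing only that the request moves on either outcome while the failure itself is handled at execution time:

   ```java
   import java.util.ArrayDeque;
   import java.util.Queue;
   import java.util.concurrent.CompletableFuture;

   class ReadyQueueSketch {
       final Queue<String> unreadyQueue = new ArrayDeque<>();
       final Queue<String> readyQueue = new ArrayDeque<>();

       // Register a not-yet-ready request: whether its future completes normally
       // or exceptionally, the request is moved to the ready queue; a failed
       // future is dealt with when the request executes, not here.
       void registerFirstRequest(String request, CompletableFuture<?> readyFuture) {
           unreadyQueue.add(request);
           readyFuture
                   .thenAccept(ignored -> moveToReady(request))
                   .exceptionally(throwable -> { moveToReady(request); return null; });
       }

       private synchronized void moveToReady(String request) {
           if (unreadyQueue.remove(request)) {
               readyQueue.add(request);
           }
       }
   }
   ```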




[GitHub] [flink] 1996fanrui commented on a diff in pull request #20137: Just for CI

2022-11-26 Thread GitBox


1996fanrui commented on code in PR #20137:
URL: https://github.com/apache/flink/pull/20137#discussion_r1032794990


##
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/channel/ChannelStateWriteRequestExecutorImpl.java:
##
@@ -51,27 +64,46 @@ class ChannelStateWriteRequestExecutorImpl implements ChannelStateWriteRequestExecutor {
     private final Thread thread;
     private volatile Exception thrown = null;
     private volatile boolean wasClosed = false;
-    private final String taskName;
+
+    private final Map<SubtaskID, BlockingQueue<ChannelStateWriteRequest>> unreadyQueues =
+            new ConcurrentHashMap<>();
+
+    private final JobID jobID;
+    private final Set<SubtaskID> subtasks;
+    private final AtomicBoolean isRegistering = new AtomicBoolean(true);

Review Comment:
   It's used in `ChannelStateWriteRequestExecutorFactory#getOrCreateExecutor`.
   
   When the executor has 5 tasks, the  `isRegistering` will be changed to 
false. Factory will create a new Executor later.
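
   A simplified sketch of that factory pattern (the limit of 5 and the class names here are illustrative; the real logic lives in `ChannelStateWriteRequestExecutorFactory#getOrCreateExecutor`):

   ```java
   import java.util.concurrent.atomic.AtomicBoolean;

   class ExecutorSketch {
       final AtomicBoolean isRegistering = new AtomicBoolean(true);
       int registeredSubtasks;

       // Returns false once the executor is "full" and no longer accepts registrations.
       boolean tryRegister(int maxSubtasks) {
           if (!isRegistering.get()) return false;
           registeredSubtasks++;
           if (registeredSubtasks >= maxSubtasks) isRegistering.set(false);
           return true;
       }
   }

   class ExecutorFactorySketch {
       private ExecutorSketch current;

       // Reuse the current executor while it is still registering;
       // otherwise create a fresh one.
       synchronized ExecutorSketch getOrCreateExecutor(int maxSubtasksPerExecutor) {
           if (current == null || !current.tryRegister(maxSubtasksPerExecutor)) {
               current = new ExecutorSketch();
               current.tryRegister(maxSubtasksPerExecutor);
           }
           return current;
       }
   }
   ```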






[GitHub] [flink] flinkbot commented on pull request #21402: Name defination suggestion for this doc

2022-11-26 Thread GitBox


flinkbot commented on PR #21402:
URL: https://github.com/apache/flink/pull/21402#issuecomment-1328035844

   
   ## CI report:
   
   * 615b75b423f1a2f30b0666cb380e3d6563e50250 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[GitHub] [flink] HarryUlysses opened a new pull request, #21402: Name defination suggestion for this doc

2022-11-26 Thread GitBox


HarryUlysses opened a new pull request, #21402:
URL: https://github.com/apache/flink/pull/21402

   We should keep the variable naming style consistent. In this doc, the Azure ADLS Gen2 storage container name uses one placeholder style, but the storage name adds an extra `$` character. I think this is inconsistent and will confuse readers, so I recommend removing the `$` character from the storage name to keep the naming style uniform.
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink-connector-hbase] ferenc-csaky commented on a diff in pull request #1: [FLINK-30062][Connectors/HBase] Externalize the existing connector code as is

2022-11-26 Thread GitBox


ferenc-csaky commented on code in PR #1:
URL: 
https://github.com/apache/flink-connector-hbase/pull/1#discussion_r1032778467


##
flink-connector-hbase-2.2/pom.xml:
##
@@ -0,0 +1,431 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+
+    <modelVersion>4.0.0</modelVersion>
+
+    <parent>
+        <groupId>org.apache.flink</groupId>
+        <artifactId>flink-connector-hbase-parent</artifactId>
+        <version>1.0-SNAPSHOT</version>
+    </parent>
+
+    <artifactId>flink-connector-hbase-2.2</artifactId>
+    <name>Flink : Connectors : HBase 2.2</name>
+    <packaging>jar</packaging>
+
+    <dependencies>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-core</artifactId>
+            <scope>provided</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-streaming-java</artifactId>
+            <scope>provided</scope>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-connector-hbase-base</artifactId>
+            <version>${project.version}</version>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.apache.hbase</groupId>
+                    <artifactId>hbase-client</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hbase</groupId>
+                    <artifactId>hbase-server</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.flink</groupId>
+            <artifactId>flink-table-api-java-bridge</artifactId>
+            <scope>provided</scope>
+            <optional>true</optional>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hadoop</groupId>
+            <artifactId>hadoop-common</artifactId>
+            <scope>provided</scope>
+            <exclusions>
+                <exclusion>
+                    <groupId>log4j</groupId>
+                    <artifactId>log4j</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.slf4j</groupId>
+                    <artifactId>slf4j-log4j12</artifactId>
+                </exclusion>
+            </exclusions>
+        </dependency>
+
+        <dependency>
+            <groupId>org.apache.hbase</groupId>
+            <artifactId>hbase-client</artifactId>
+            <exclusions>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jetty-util</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jetty</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jetty-sslengine</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jsp-2.1</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>jsp-api-2.1</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.mortbay.jetty</groupId>
+                    <artifactId>servlet-api-2.5</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hbase</groupId>
+                    <artifactId>hbase-annotations</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>com.sun.jersey</groupId>
+                    <artifactId>jersey-core</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-common</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-auth</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-annotations</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-mapreduce-client-core</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-client</artifactId>
+                </exclusion>
+                <exclusion>
+                    <groupId>org.apache.hadoop</groupId>
+                    <artifactId>hadoop-hdfs</artifactId>

[GitHub] [flink-connector-hbase] ferenc-csaky commented on pull request #1: [FLINK-30062][Connectors/HBase] Externalize the existing connector code as is

2022-11-26 Thread GitBox


ferenc-csaky commented on PR #1:
URL: 
https://github.com/apache/flink-connector-hbase/pull/1#issuecomment-1328032879

   Messed up the rebase the first time, so this PR got closed because the init commit ended up on top. That is fixed now, and the comments are also addressed; please check #2.





[GitHub] [flink-connector-hbase] ferenc-csaky opened a new pull request, #2: [FLINK-30062][Connectors/HBase] Adapt connector code to external repo

2022-11-26 Thread GitBox


ferenc-csaky opened a new pull request, #2:
URL: https://github.com/apache/flink-connector-hbase/pull/2

   I moved the business code over as is. Some changes were made:
   
   * Maven dependency structure: I tried to move common things under `dependencyManagement` so modules use the same versions.
   * I added a default `2.8.5` Hadoop version to both `ITCase` setup validation checks to trigger the tests, because in their current form they check an environment variable that is not set locally.
   
   Also addressed the comments on #1.
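
   For context, moving shared versions under `dependencyManagement` in the parent POM typically looks like this (the property name `${hbase2.version}` is illustrative, not necessarily what this PR uses):

   ```xml
   <!-- Parent pom: pin shared versions once under dependencyManagement. -->
   <dependencyManagement>
       <dependencies>
           <dependency>
               <groupId>org.apache.hbase</groupId>
               <artifactId>hbase-client</artifactId>
               <version>${hbase2.version}</version>
           </dependency>
       </dependencies>
   </dependencyManagement>

   <!-- Child module: inherits the version, declares only the coordinates. -->
   <dependencies>
       <dependency>
           <groupId>org.apache.hbase</groupId>
           <artifactId>hbase-client</artifactId>
       </dependency>
   </dependencies>
   ```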





[GitHub] [flink-connector-hbase] ferenc-csaky closed pull request #1: [FLINK-30062][Connectors/HBase] Externalize the existing connector code as is

2022-11-26 Thread GitBox


ferenc-csaky closed pull request #1: [FLINK-30062][Connectors/HBase] 
Externalize the existing connector code as is
URL: https://github.com/apache/flink-connector-hbase/pull/1





[GitHub] [flink] flinkbot commented on pull request #21401: [FLINK-29718][table] Supports hive sum function by native implementation

2022-11-26 Thread GitBox


flinkbot commented on PR #21401:
URL: https://github.com/apache/flink/pull/21401#issuecomment-1328018740

   
   ## CI report:
   
   * f8760a5a19f8a7acf6e6718648a2afd3b1977fb6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-29718) Supports hive sum function by native implementation

2022-11-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-29718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-29718:
---
Labels: pull-request-available  (was: )

> Supports hive sum function by native implementation
> ---
>
> Key: FLINK-29718
> URL: https://issues.apache.org/jira/browse/FLINK-29718
> Project: Flink
>  Issue Type: Sub-task
>Reporter: dalongliu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.17.0
>
>






[GitHub] [flink] lsyldliu opened a new pull request, #21401: [FLINK-29718][table] Supports hive sum function by native implementation

2022-11-26 Thread GitBox


lsyldliu opened a new pull request, #21401:
URL: https://github.com/apache/flink/pull/21401

   
   ## What is the purpose of the change
   
   Supports hive sum function by native implementation
   
   
   ## Brief change log
   
 - *Supports hive sum function by native implementation*
   
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - *Added integration tests in HiveDialectAggITCase*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no )
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (JavaDocs)
   





[jira] [Created] (FLINK-30221) Fix the bug of sum(try_cast(string as bigint)) return null when partial elements can't convert to bigint

2022-11-26 Thread dalongliu (Jira)
dalongliu created FLINK-30221:
-

 Summary: Fix the bug of sum(try_cast(string as bigint)) return 
null when partial elements can't convert to bigint
 Key: FLINK-30221
 URL: https://issues.apache.org/jira/browse/FLINK-30221
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Runtime
Affects Versions: 1.17.0
Reporter: dalongliu
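
The expected semantics can be illustrated in plain Java: {{try_cast(string as bigint)}} yields NULL for unparsable strings, and {{sum}} should skip those NULLs rather than return NULL for the whole group. This is a sketch of the intended behavior, not the planner/runtime code:

```java
import java.util.Arrays;
import java.util.List;

class TryCastSumSketch {
    // try_cast(string as bigint): null instead of an error on bad input.
    static Long tryCastToBigint(String s) {
        try {
            return Long.parseLong(s.trim());
        } catch (NumberFormatException | NullPointerException e) {
            return null;
        }
    }

    // sum over nullable values: nulls are skipped; only an empty or
    // all-null group yields null.
    static Long sum(List<Long> values) {
        Long acc = null;
        for (Long v : values) {
            if (v == null) continue;
            acc = (acc == null) ? v : acc + v;
        }
        return acc;
    }
}
```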








[jira] [Created] (FLINK-30220) Hide user credentials in Flink SQL JDBC connector

2022-11-26 Thread Jun Qin (Jira)
Jun Qin created FLINK-30220:
---

 Summary: Hide user credentials in Flink SQL JDBC connector
 Key: FLINK-30220
 URL: https://issues.apache.org/jira/browse/FLINK-30220
 Project: Flink
  Issue Type: Improvement
Reporter: Jun Qin


Similar to FLINK-28028, when using Flink SQL JDBC connector, we should also 
have a way to secure the username and the password used in the DDL:
{code:java}
CREATE TABLE MyUserTable (
  id BIGINT,
  name STRING,
  age INT,
  status BOOLEAN,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
   'connector' = 'jdbc',
   'url' = 'jdbc:mysql://localhost:3306/mydatabase',
   'table-name' = 'users',
   'username' = 'a-username',
   'password' = 'a-password'
);
 {code}
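
One possible approach (purely illustrative; the mechanism actually chosen for FLINK-28028 may differ) is to resolve placeholders from the environment before the DDL ever contains the plain-text secret:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class CredentialResolver {
    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{ENV:([A-Z0-9_]+)}");

    // Replaces ${ENV:NAME} placeholders with values from the given lookup
    // (e.g. System.getenv()), so credentials never appear in the DDL text.
    static String resolve(String ddl, Map<String, String> env) {
        Matcher m = PLACEHOLDER.matcher(ddl);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = env.get(m.group(1));
            if (value == null) {
                throw new IllegalArgumentException("Missing secret: " + m.group(1));
            }
            m.appendReplacement(out, Matcher.quoteReplacement(value));
        }
        m.appendTail(out);
        return out.toString();
    }
}
```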





[jira] [Updated] (FLINK-30220) Secure user credentials in Flink SQL JDBC connector

2022-11-26 Thread Jun Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Qin updated FLINK-30220:

Summary: Secure user credentials in Flink SQL JDBC connector  (was: Hide 
user credentials in Flink SQL JDBC connector)

> Secure user credentials in Flink SQL JDBC connector
> ---
>
> Key: FLINK-30220
> URL: https://issues.apache.org/jira/browse/FLINK-30220
> Project: Flink
>  Issue Type: Improvement
>Reporter: Jun Qin
>Priority: Major
>
> Similar to FLINK-28028, when using Flink SQL JDBC connector, we should also 
> have a way to secure the username and the password used in the DDL:
> {code:java}
> CREATE TABLE MyUserTable (
>   id BIGINT,
>   name STRING,
>   age INT,
>   status BOOLEAN,
>   PRIMARY KEY (id) NOT ENFORCED
> ) WITH (
>'connector' = 'jdbc',
>'url' = 'jdbc:mysql://localhost:3306/mydatabase',
>'table-name' = 'users',
>'username' = 'a-username',
>'password' = 'a-password'
> );
>  {code}





[GitHub] [flink] flinkbot commented on pull request #21400: [FLINK-30219] Fetch results api in sql gateway return error result

2022-11-26 Thread GitBox


flinkbot commented on PR #21400:
URL: https://github.com/apache/flink/pull/21400#issuecomment-1328004247

   
   ## CI report:
   
   * ea65940d6768d658b74a63243db6dac077e975a9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run azure` re-run the last Azure build
   





[jira] [Updated] (FLINK-30219) Fetch results api in sql gateway return error result.

2022-11-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-30219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-30219:
---
Labels: pull-request-available  (was: )

> Fetch results api in sql gateway return error result.
> -
>
> Key: FLINK-30219
> URL: https://issues.apache.org/jira/browse/FLINK-30219
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Gateway
>Affects Versions: 1.16.0
>Reporter: Aiden Gong
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.16.1
>
> Attachments: image-2022-11-26-10-38-02-270.png
>
>
> !image-2022-11-26-10-38-02-270.png!





[GitHub] [flink] GOODBOY008 opened a new pull request, #21400: [FLINK-30219] Fetch results api in sql gateway return error result

2022-11-26 Thread GitBox


GOODBOY008 opened a new pull request, #21400:
URL: https://github.com/apache/flink/pull/21400

   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   Fix issue : [FLINK-30219] Fetch results api in sql gateway return error 
result.
   
   
   ## Verifying this change
   
   Please make sure both new and modified tests in this PR follows the 
conventions defined in our code quality guide: 
https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   





[GitHub] [flink] gaborgsomogyi commented on pull request #20875: [FLINK-29363][runtime-web] allow fully redirection in web dashboard

2022-11-26 Thread GitBox


gaborgsomogyi commented on PR #20875:
URL: https://github.com/apache/flink/pull/20875#issuecomment-1328003317

   @rmetzger now it's all green and good to go :)

