Thanks for all the valuable feedback. Helm charts / templates sound good to
me too; they will bring a lot of convenience to production deployments of
the SQL Gateway. Looking forward to this, thanks.
Best,
Shammon FY
On Wed, Sep 20, 2023 at 10:01 AM Yangze Guo wrote:
Thanks for the reply, Gyula. Also thanks for the input from Thomas and Dongwoo.
Helm charts / templates for creating SQL Gateway deployment / services
sounds good to me. I'll work on it and also update that to the OLAP
quickstart doc.
Best,
Yangze Guo
On Tue, Sep 19, 2023 at 11:46 PM Dongwoo Kim
A simple Helm chart with solid docs that covers both scenarios, 1) running
the gateway as a sidecar and 2) running it as an independent deployment,
sounds good to me.
It would be nice if we could also include optional features in the chart,
such as a k8s Service for exposing the SQL Gateway.
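The optional Service could look roughly like the minimal sketch below; the Service name and label selector are assumptions for illustration, and 8083 is the SQL Gateway's default REST endpoint port:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sql-gateway
spec:
  selector:
    app: sql-gateway     # must match the labels on the gateway pods
  ports:
    - name: rest
      port: 8083         # default SQL Gateway REST endpoint port
      targetPort: 8083
```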
Best regards,
Dongwoo
It is already possible to bring up a SQL Gateway as a sidecar utilizing the
pod templates - I tend to also see this more of a documentation/example
issue rather than something that calls for a separate CRD or other
dedicated operator support.
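As a rough sketch of that pod-template approach (the Flink version, image tag, and gateway start command here are illustrative assumptions, not tested operator config):

```yaml
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: session-with-gateway
spec:
  flinkVersion: v1_17
  podTemplate:
    spec:
      containers:
        - name: flink-main-container   # the operator-managed Flink container
        - name: sql-gateway            # sidecar running the gateway
          image: flink:1.17            # assumed image providing sql-gateway.sh
          command: ["bash", "-c"]
          args:
            - >-
              bin/sql-gateway.sh start-foreground
              -Dsql-gateway.endpoint.rest.address=localhost
```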
Thanks,
Thomas
On Tue, Sep 19, 2023 at 3:41 PM Gyula
Based on this I think we should start with simple Helm charts / templates
for creating the `FlinkDeployment` together with a separate Deployment for
the SQL Gateway.
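A hypothetical values.yaml shape for such a chart could look like the following; every key here is an assumption for illustration, not an existing chart's schema:

```yaml
sqlGateway:
  enabled: true
  replicas: 1
  service:
    enabled: true
    port: 8083          # default SQL Gateway REST endpoint port
flinkDeployment:
  create: true
  name: session-cluster
  flinkVersion: v1_17
```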
If the gateway itself doesn't integrate well with the operator managed CRs
(sessionjobs) then I think it's better and simpler to have
Thanks for the reply, @Gyula.
I would like to first provide more context on OLAP scenarios. In OLAP
scenarios, users typically submit many short batch jobs whose execution
times are measured in seconds or even sub-seconds.
Additionally, due to the lightweight nature of these jobs, th
As I wrote in my previous answer, this could be done as a helm chart or as
part of the operator easily. Both would work.
My main concern for adding this into the operator is that the SQL Gateway
itself is not properly integrated with the Operator Custom resources.
Gyula
On Mon, Sep 18, 2023 at 4:
Thanks @Gyula, I would like to share our use of sql-gateway with the Flink
session cluster and I hope that it could help you to have a clearer
understanding of our needs :)
As @Yangze mentioned, currently we use Flink as an OLAP platform via the
following steps:
1. Setup a flink session cluster by f
Hi!
It sounds pretty easy to deploy the gateway automatically with session
cluster deployments from the operator, but there is currently a major
limitation: the SQL Gateway itself doesn't really support any operator
integration so jobs submitted through the SQL gateway would not be
manageable by t
> There would be many different ways of doing this. One gateway per session
> cluster, one gateway shared across different clusters...
Currently, the SQL Gateway cannot be shared across multiple clusters.
> understand the tradeoff and the simplest way of accomplishing this.
I'm not familiar with th
There would be many different ways of doing this. One gateway per session
cluster, one gateway shared across different clusters...
I would not rush to add anything anywhere until we understand the tradeoff
and the simplest way of accomplishing this.
The operator already supports ingresses for sess
Thanks for bringing this up, Dongwoo. Flink SQL Gateway is also a key
component for OLAP scenarios.
@Gyula
How about adding the SQL Gateway as an optional component of Session
Cluster Deployments? Users could specify the resources, instance count, and
ports of the SQL Gateway. I think that would help a lot.
If we start from the CRD direction, I think this mode is more like a
sidecar of the session cluster: jobs are submitted to the session cluster
by sending SQL commands to the SQL Gateway. I don't know if my statement is
accurate.
Xiaolong Wang wrote on Fri, Sep 15, 2023 at 13:27:
> Hi, Dongwoo,
>
> Since Flink
Hi, Dongwoo,
Since Flink SQL gateway should run upon a Flink session cluster, I think
it'd be easier to add more fields to the CRD of `FlinkSessionJob`.
e.g.
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: sql-gateway
spec:
  sqlGateway:
    endpoint: "hiveserver2"
Hi all,
*@Gyula*
Thanks for the consideration Gyula. My initial idea for the CR was roughly
like below.
I focused on simplifying the setup in k8s environment, but I agree with
your opinion that for the sql gateway
we don't need custom operator logic to handle and most of the requirements
can be me
Hi, Shammon,
Yes, I want to create a Flink SQL-gateway in a job-manager.
Currently, the above script is generally a work-around and allows me to
start a Flink session job manager with a SQL gateway running upon.
I agree that it'd be more elegant if we create a new job type and write a
script,
Hi,
Currently `sql-gateway` can be started with the script `sql-gateway.sh` in
an existing node, it is more like a simple "standalone" node. I think it's
valuable if we can do more work to start it in k8s.
For xiaolong:
Do you want to start a sql-gateway instance in the jobmanager pod? I think
ma
Hi, I've experimented with this feature on K8s recently; here are some of my trials:
1. Create a new kubernetes-jobmanager.sh script with the following content:
#!/usr/bin/env bash
# Start the SQL Gateway, then launch the original JobManager entrypoint
# (here renamed to kubernetes-jobmanager1.sh).
$FLINK_HOME/bin/sql-gateway.sh start
$FLINK_HOME/bin/kubernetes-jobmanager1.sh kubernetes-session
2. Build your own Flink do
Hi!
I don't completely understand what the content of such a CRD would be.
Could you give a minimal example of how the Flink SQL Gateway CR YAML would
look?
Adding a CRD would mean you need to add some operator/controller logic as
well. Why not simply use a Deployment / StatefulSet in Kubernetes?
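For illustration, a plain Deployment along those lines could be sketched as below; the image, labels, and the session cluster REST service name (`flink-session-rest`) are assumptions, not a known-good config:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sql-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sql-gateway
  template:
    metadata:
      labels:
        app: sql-gateway
    spec:
      containers:
        - name: sql-gateway
          image: flink:1.17        # assumed image providing sql-gateway.sh
          command: ["bash", "-c"]
          args:
            - >-
              bin/sql-gateway.sh start-foreground
              -Dsql-gateway.endpoint.rest.address=0.0.0.0
              -Drest.address=flink-session-rest
          ports:
            - containerPort: 8083  # default SQL Gateway REST endpoint port
```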
Hi all,
I've been working on setting up a flink SQL gateway in a k8s environment
and it got me thinking — what if we had a CRD for this?
So I have quick questions below.
1. Is there ongoing work to create a CRD for the Flink SQL Gateway?
2. If not, would the community be open to considering a CRD