>
> I mean, if it helps, you can check out
> https://www.ververica.com/blog/how-to-write-fast-flink-sql .
>
>
> Regards
>
> On Tue, Jun 25, 2024 at 4:30 PM Ashish Khatkar via user <
> user@flink.apache.org> wrote:
>
>> Hi Xuyang,
>>
>> The i
> - state.backend.rocksdb.writebuffer.size=x
> - If possible, try a left window join for your streams
> - Please share which sink you are using, and, if possible, the
>   per-operator, source, and sink throughput.
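The write-buffer suggestion above would normally go into flink-conf.yaml. A minimal sketch follows; the values are illustrative placeholders, not recommendations, and note that when RocksDB managed memory is enabled (the default), Flink governs memtable sizing itself, so these fine-grained options may not take effect as written:

```yaml
# Illustrative values only; tune against your workload.
state.backend.rocksdb.writebuffer.size: 128mb   # size of a single RocksDB memtable
state.backend.rocksdb.writebuffer.count: 4      # max memtables per column family
```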
>
>
> On Mon, Jun 24, 2024 at 3
Hi all,
We are facing backpressure in the Flink SQL job from the sink, and the
backpressure comes from only a single task. This causes checkpoints to fail
despite unaligned checkpoints and buffer debloating being enabled.
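For reference, enabling unaligned checkpoints and buffer debloating as described is typically done with configuration along these lines; the key names below match recent Flink releases, but verify them against the docs for your version, and the timeout/target values are illustrative:

```yaml
execution.checkpointing.unaligned: true
execution.checkpointing.aligned-checkpoint-timeout: 30s   # fall back to unaligned after this
taskmanager.network.memory.buffer-debloat.enabled: true
taskmanager.network.memory.buffer-debloat.target: 1s      # targeted buffer transit time
```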
We enabled the flame graph, and the task spends most of its time doing
Additional exceptions with the same error occur on different files.
PyFlink lib error:
java.lang.RuntimeException: An error occurred while copying the file.
    at org.apache.flink.api.common.cache.DistributedCache.getFile(DistributedCache.java:158)
    at
Hi,
We are using the Flink 1.17.0 Table API with RocksDB as the state backend to
provide a service that lets our users run SQL queries. The tables are created
from the Avro schema, and we also let users attach a Python UDF as a plugin.
The plugin is downloaded when the table is built, and we update
nk/flink-docs-master/docs/libs/state_processor_api/
>
> Best,
> Shammon FY
>
>
> On Fri, Mar 17, 2023 at 8:48 PM Ashish Khatkar via user <
> user@flink.apache.org> wrote:
>
>> Hi all,
>>
>> I need help in understanding if we can add columns with defaults,
Hi all,
I need help understanding whether we can add columns with defaults, say
NULL, to an existing table and recover the job from the savepoint.
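If the tables are defined from Avro schemas (as in the similar setup described earlier in this thread), the conventional way to express such a change on the schema side is a new optional field with a null default, which Avro schema resolution treats as backward compatible. A minimal sketch with hypothetical field names (whether Flink then restores the job's state from the savepoint is the open question here):

```json
{
  "type": "record",
  "name": "Example",
  "fields": [
    {"name": "existing_field", "type": "string"},
    {"name": "new_field", "type": ["null", "string"], "default": null}
  ]
}
```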
We are using the Flink 1.16.0 Table API with RocksDB as the state backend to
provide a service that lets our users run SQL queries. The tables are created using the