Good morning Alexis,
We do something like this all the time: read an existing savepoint, copy over the operator states (keyed/non-keyed) that should not be changed, and process/patch the remaining ones by transforming and bootstrapping them into new state.
I could share more details for a more specific
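The copy-some/patch-others pattern described above can be illustrated with a plain-Java sketch. This is only a conceptual simulation with hypothetical names (the real work is done with Flink's state processor API, e.g. reading a savepoint and using OperatorTransformation#bootstrapWith), but it shows the shape of the operation: untouched operator states are carried over verbatim, and only the targeted operator's state is rebuilt.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual sketch only: operator state is modeled as a String per operator UID.
public class SavepointPatchSketch {
    public static Map<String, String> patch(
            Map<String, String> oldSavepoint,
            String operatorToPatch,
            Function<String, String> transformation) {
        Map<String, String> newSavepoint = new HashMap<>();
        for (Map.Entry<String, String> e : oldSavepoint.entrySet()) {
            if (e.getKey().equals(operatorToPatch)) {
                // Bootstrap new state for the operator being changed.
                newSavepoint.put(e.getKey(), transformation.apply(e.getValue()));
            } else {
                // Copy untouched operator state over as-is.
                newSavepoint.put(e.getKey(), e.getValue());
            }
        }
        return newSavepoint;
    }

    public static void main(String[] args) {
        Map<String, String> old = new HashMap<>();
        old.put("source-uid", "offsets=42");
        old.put("window-uid", "counts=7");
        Map<String, String> patched = patch(old, "window-uid", s -> s + ";migrated");
        System.out.println(patched.get("source-uid"));
        System.out.println(patched.get("window-uid"));
    }
}
```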
Hi team,
Thanks for your quick response.
I have an inquiry regarding file processing in the event of a job restart.
When the job is restarted, we encounter challenges in tracking which files
have been processed and which remain pending. Is there a method to
seamlessly resume processing files from w
Congratulations and thanks release managers and everyone who has
contributed!
Best,
Jark
On Fri, 27 Oct 2023 at 12:25, Hang Ruan wrote:
> Congratulations!
>
> Best,
> Hang
>
> Samrat Deb wrote on Fri, 27 Oct 2023 at 11:50:
>
> > Congratulations on the great release
> >
> > Bests,
> > Samrat
Congratulations!
Best,
Hang
Samrat Deb wrote on Fri, 27 Oct 2023 at 11:50:
> Congratulations on the great release
Great work! Congratulations to everyone involved!
Best,
Yangze Guo
On Fri, Oct 27, 2023 at 10:23 AM Qingsheng Ren wrote:
>
> Congratulations and big THANK YOU to everyone helping with this release!
Congratulations and big THANK YOU to everyone helping with this release!
Best,
Qingsheng
On Fri, Oct 27, 2023 at 10:18 AM Benchao Li wrote:
> Great work, thanks everyone involved!
Great work, thanks everyone involved!
Rui Fan <1996fan...@gmail.com> wrote on Fri, 27 Oct 2023 at 10:16:
>
> Thanks for the great work!
Thanks for the great work!
Best,
Rui
On Fri, Oct 27, 2023 at 10:03 AM Paul Lam wrote:
> Finally! Thanks to all!
Yeah, agreed, not a problem in general. But it just seems odd. As far as my understanding goes, returning true when the fileName can be null will blow up a lot more in the reader.
I just want to understand whether this is an erroneous condition or an actual use case. Let's say, is it possible to get a n
Finally! Thanks to all!
Best,
Paul Lam
> On 27 Oct 2023 at 03:58, Alexander Fedulov wrote:
>
> Great work, thanks everyone!
Great work, thanks everyone!
Best,
Ron
Alexander Fedulov wrote on Fri, 27 Oct 2023 at 04:00:
> Great work, thanks everyone!
Hi Arjun,
Flink's FileSource doesn't move or delete the files as of now. It will keep the files as-is and remember the name of each file it has read in checkpointed state, to ensure it doesn't read the same file twice.
Flink's source API works in a way that a single Enumerator operates on the JobManager. Th
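The "remember file names in checkpointed state" idea can be sketched in plain Java. This is a hedged simulation, not Flink's actual enumerator code: the set of already-assigned file names stands in for the state that Flink would snapshot into a checkpoint and restore on restart.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Conceptual sketch: a directory is scanned repeatedly, but each file is
// handed out for processing at most once.
public class FileDedupSketch {
    // In Flink, this set would live in checkpointed enumerator state.
    private final Set<String> alreadyRead = new HashSet<>();

    /** Returns only files not seen before, remembering them as processed. */
    public List<String> discover(List<String> filesInDirectory) {
        List<String> newFiles = new ArrayList<>();
        for (String f : filesInDirectory) {
            if (alreadyRead.add(f)) { // add() returns false if already present
                newFiles.add(f);
            }
        }
        return newFiles;
    }

    public static void main(String[] args) {
        FileDedupSketch enumerator = new FileDedupSketch();
        System.out.println(enumerator.discover(List.of("a.csv", "b.csv")));
        System.out.println(enumerator.discover(List.of("a.csv", "b.csv", "c.csv")));
    }
}
```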
Great work, thanks everyone!
Best,
Alexander
On Thu, 26 Oct 2023 at 21:15, Martijn Visser wrote:
> Thank you all who have contributed!
* To clarify: by "different output" I mean that, for the same input message, the output message could be slightly smaller due to the aforementioned factors and fall into the allowed size range without causing any failures.
On Thu, 26 Oct 2023 at 21:52, Alexander Fedulov wrote:
Your expectations are correct. In the case of AT_LEAST_ONCE, Flink will wait for all outstanding records in the Kafka buffers to be acknowledged before marking the checkpoint successful (= also recording the offsets of the sources). That said, there might be other factors involved that could lead to a d
Is there an actual issue behind this question?
In general: this is a form of defensive programming for a public interface
and the decision here is to be more lenient when facing potentially
erroneous user input rather than blow up the whole application with a
NullPointerException.
Best,
Alexander
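The lenient-versus-strict trade-off can be shown with a tiny sketch. The method names and the `.csv` check here are hypothetical, not the actual Flink predicate under discussion: the point is that a public-facing check can reject erroneous input (such as a null file name) instead of letting a NullPointerException take down the whole application.

```java
// Conceptual sketch of defensive programming at a public interface boundary.
public class LenientFilterSketch {
    /** Lenient: erroneous user input is rejected rather than crashing. */
    static boolean acceptsLenient(String fileName) {
        return fileName != null && fileName.endsWith(".csv");
    }

    /** Strict alternative: the whole job would fail on a null input. */
    static boolean acceptsStrict(String fileName) {
        return fileName.endsWith(".csv"); // NullPointerException if null
    }

    public static void main(String[] args) {
        System.out.println(acceptsLenient(null));       // rejected, no crash
        System.out.println(acceptsLenient("data.csv")); // accepted
    }
}
```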
Flink's FileSource will enumerate the files and keep track of the progress for the individual files in parallel. Depending on the format you use, the progress is tracked at a different level of granularity (TextLine being the simplest one, tracking progress based on the number of lines proc
Thank you all who have contributed!
On Thu, 26 Oct 2023 at 18:41, Feng Jin wrote:
> Thanks for the great work! Congratulations
If you use an Avro schema, you should also store the data in Avro format. Everything else is going to be a hack.
If you really want to proceed with the hack, you'll either need to use aliases in your Avro reader schema or change the headers of the CSV file to comply with the field names in Avro.
Best,
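For the alias approach mentioned above, the reader schema can declare, per field, the alternative names it should also accept. A minimal sketch (the field and alias names here are hypothetical, chosen to match the CSV-header mismatch discussed in this thread):

```json
{
  "namespace": "avro.employee",
  "type": "record",
  "name": "EmployeeTest",
  "fields": [
    {
      "name": "first_name",
      "type": "string",
      "aliases": ["firstName"]
    }
  ]
}
```

Per the Avro specification, `aliases` on a field lists alternate names used during schema resolution, which is what lets a differently-named input column map onto the generated class's field.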
Thanks for the great work! Congratulations
Best,
Feng Jin
On Fri, Oct 27, 2023 at 12:36 AM Leonard Xu wrote:
> Congratulations, Well done!
Congratulations, Well done!
Best,
Leonard
On Fri, Oct 27, 2023 at 12:23 AM Lincoln Lee wrote:
> Thanks for the great work! Congrats all!
Thanks for the great work! Congrats all!
Best,
Lincoln Lee
Jing Ge wrote on Fri, 27 Oct 2023 at 00:16:
> The Apache Flink community is very happy to announce the release of Apache
> Flink 1.18.0, which is the first release for the Apache Flink 1.18 series.
The Apache Flink community is very happy to announce the release of Apache
Flink 1.18.0, which is the first release for the Apache Flink 1.18 series.
Apache Flink® is an open-source unified stream and batch data processing
framework for distributed, high-performing, always-available, and accurate
Hi Alexander,
Thanks for the reply.
Actually, I have a system where data travels in the form of user-defined, Avro-schema-generated objects.
Sample code:
static void readCsvWithCustomSchemaDecoder(StreamExecutionEnvironment env, Path dataDirectory) throws Exception {
    Class recordClazz = EmployeeTe
Hi Kirti,
What do you mean exactly by "Flink CSV Decoder"? Please provide a snippet of the code that you are trying to execute.
To be honest, combining CSV with Avro-generated classes sounds rather strange, and you might want to reconsider your approach.
As for a quick fix, using aliases in your r
Hi Team,
I am using the Flink CSV Decoder with an AVSC-generated Java object, and I am facing an issue if a field name contains an underscore (_) or starts with an uppercase letter.
Sample Schema:
{
  "namespace": "avro.employee",
  "type": "record",
  "name": "EmployeeTest",
  "fields": [
    {
      "name":
Hello,
The documentation of the state processor API has some examples to modify an
existing savepoint by defining a StateBootstrapTransformation. In all
cases, the entrypoint is OperatorTransformation#bootstrapWith, which
expects a DataStream. If I pass an empty DataStream to bootstrapWith and
the
Hello team,
I'm currently in the process of configuring a Flink job. This job entails
reading files from a specified directory and then transmitting the data to
a Kafka sink. I've already successfully designed a Flink job that reads the
file contents in a streaming manner and effectively sends them
Hi everyone,
I would like to compile flink-connector-pulsar in my company's OE system. The operating environment of OE is k8s, and my Flink version is 1.17.2.
But it threw some errors during compilation. The following is the specific exception information:
[2023-10-26T18:20:30.061][Co