Hi team,
We encountered a schema-forbidden issue during deployment while upgrading
our Flink version (1.16.1 -> 1.17.1), and we hope to get some help
here :pray:.
As part of the upgrade from 1.16.1 to 1.17.1, we also
switched from our customized schema-registry dependency to
or
Hi team,
I am upgrading our Flink version from 1.16 to 1.17.1 and am currently facing
the issue below. Can I get some help? What should I do about this? Thanks!
org.apache.flink.util.FlinkException: Global failure triggered by
OperatorCoordinator for 'Source: Customer Product Summary Selected' (operator
Hi Ajinkya,
If the data volume of your job is small, you can try decreasing the batch
shuffle size with the config option
`taskmanager.memory.framework.off-heap.batch-shuffle.size`; the default
value is `64M`. You can find more information in the doc [1].
[1]
https://github.com/apache/flink/blob/master
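If you decide to try that, the option goes in flink-conf.yaml. A minimal sketch, assuming your job can tolerate a smaller buffer (the `32m` below is only an illustrative value, not a recommendation):

```yaml
# flink-conf.yaml
# Example only: shrink the batch shuffle buffer below the 64M default.
taskmanager.memory.framework.off-heap.batch-shuffle.size: 32m
```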
Ajinkya Pathrudkar
I hope this email finds you well. I am writing to inform you of a recent
update we made to our Flink version, upgrading from 1.14 to 1.15, along
with a shift from Java 8 to Java 11. Since the update, we have encountered
out-of-memory (direct me
Depending on the build system used, you could check the dependency tree,
e.g. for Maven it would be `mvn dependency:tree
-Dincludes=org.apache.parquet`
Matthias
On Wed, Jun 30, 2021 at 8:40 AM Thomas Wang wrote:
> Thanks Matthias. Could you advise how I can confirm this in my environment?
>
Thanks Matthias. Could you advise how I can confirm this in my environment?
Thomas
On Tue, Jun 29, 2021 at 1:41 AM Matthias Pohl
wrote:
> Hi Rommel, Hi Thomas,
> Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
> FLINK-19137 [1]. The error you're seeing looks like some dependen
Hi Rommel, Hi Thomas,
Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
FLINK-19137 [1]. The error you're seeing looks like some dependency issue
where you have a version other than 1.11.1
of org.apache.parquet:parquet-column:jar on your classpath?
Matthias
[1] https://issues.apac
To give more information: we were using parquet-avro version 1.10.0 with
Flink 1.11.2 and it was running fine. Now, with Flink 1.12.1, the error
message shows up.
Thank you for your help.
Rommel
On Tue, Jun 22, 2021 at 2:41 PM Thomas Wang wrote:
> Hi,
>
> We recently upgraded our Flink version from 1.11.2 to 1.12.1 a
Hi,
We recently upgraded our Flink version from 1.11.2 to 1.12.1 and one of our
jobs that used to run ok, now sees the following error. This error doesn't
seem to be related to any user code. Can someone help me take a look?
Thanks.
Thomas
java.lang.NoSuchMethodError:
org.apache.parquet.column.
Could you share the full stack trace with us? Could you also check the
stack trace in the task managers' logs?
As a side note, make sure you are using the same version of all Flink
dependencies.
Best,
Dawid
On 17/03/2021 06:26, soumoks wrote:
> Hi,
>
> We have upgraded an application originally wri
Hi,
We have upgraded an application originally written for Flink 1.9.1 with
Scala 2.11 to Flink 1.11.2 with Scala 2.12.7 and we are seeing the following
error at runtime.
2021-03-16 20:37:08
java.lang.RuntimeException
at
org.apache.flink.streaming.runtime.io.RecordWriterOutput.pushToRecordWrit
/flink-docs-release-1.10/ops/upgrading.html#compatibility-table
2. Yes, you need to recompile (but ideally you don't need to
change anything).
On Mon, Apr 6, 2020 at 10:19 AM Stephen Connolly
<stephen.alan.conno...@gmail.com> wrote:
Quick quest
Quick questions on upgrading Flink.
All our jobs are compiled against Flink 1.8.x
We are planning to upgrade to 1.10.x
1. Is the recommended path to upgrade one minor at a time, i.e. 1.8.x ->
1.9.x and then 1.9.x -> 1.10.x as a second step or is the big jump
supported, i.e. 1.8.x ->
Judging from the stack trace the state should be part of the operator
state and not the partitioned RocksDB state. If you have implemented
the Checkpointed interface anywhere, that would be a good place to
pinpoint the anonymous class. Is it possible to share the job code?
– Ufuk
On Fri, Jul 1, 2
Ah, this might be in code that runs at a different layer from the
StateBackend. Can you maybe pinpoint which of your user classes this
anonymous class is and where it is used? Maybe by replacing them with
non-anonymous classes and checking which replacement fixes the problem.
-
Aljoscha
On Fri, 1 J
I've just double checked and I do still get the ClassNotFound error for an
anonymous class, on a job which uses the RocksDBStateBackend.
In case it helps, this was the full stack trace:
java.lang.RuntimeException: Failed to deserialize state handle and
setup initial operator state.
at org
Thanks guys, that's very helpful info!
@Aljoscha I thought I saw this exception on a job that was using the
RocksDB state backend, but I'm not sure. I will do some more tests today to
double check. If it's still a problem I'll try the explicit class
definitions solution.
Josh
On Thu, Jun 30, 201
Also, you're using the FsStateBackend, correct?
Reason I'm asking is that the problem should not occur for the RocksDB
state backend. There, we don't serialize any user code, only binary data. A
while back I wanted to change the FsStateBackend to also work like this.
Now might be a good time to ac
Hi Josh,
you could also try to replace your anonymous classes by explicit class
definitions. This should assign these classes a fixed name independent of
the other anonymous classes. Then the class loader should be able to
deserialize your serialized data.
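To illustrate the naming issue outside of Flink (the class below is a made-up example, not from the job in question): the compiler gives anonymous classes positional names like `ClassNames$1`, which shift when the surrounding source changes, while an explicit nested class keeps a stable name.

```java
import java.io.Serializable;
import java.util.function.Function;

public class ClassNames {
    // Anonymous class: the compiler generates a positional name such as
    // ClassNames$1. Adding or removing another anonymous class earlier in
    // the file shifts the number, breaking deserialization of old state.
    static final Function<Integer, Integer> ANON = new Function<Integer, Integer>() {
        @Override
        public Integer apply(Integer x) { return x + 1; }
    };

    // Explicit class: the name ClassNames$AddOne stays fixed no matter
    // what else changes in the file, so serialized instances remain
    // restorable under the same class name.
    static class AddOne implements Function<Integer, Integer>, Serializable {
        @Override
        public Integer apply(Integer x) { return x + 1; }
    }

    public static void main(String[] args) {
        System.out.println(ANON.getClass().getName());         // ends with "$1"
        System.out.println(new AddOne().getClass().getName()); // ends with "$AddOne"
    }
}
```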
Cheers,
Till
On Thu, Jun 30, 2016 at 1:
Hi Josh,
I think in your case the problem is that Scala might choose different names
for synthetic/generated classes. This will trip up the code that is trying
to restore from a snapshot taken with an earlier version of the code, where
classes were named differently.
I'm afraid I don't kno
Hi Josh,
You have to assign UIDs to all operators in order to change the topology.
Plus, you have to add dummy operators for all UIDs that you removed; this
is currently a limitation, because Flink will attempt to find all UIDs
of the old job.
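As a sketch of what that looks like in job code (the operator classes and uid strings below are hypothetical; the calls assume the DataStream API, where `uid(...)` pins an operator's state to a stable identifier):

```java
// Hypothetical job fragment: each stateful operator gets an explicit,
// stable uid so its state in a savepoint can be matched after a redeploy.
DataStream<Event> events = env
    .addSource(new MyEventSource())
    .uid("event-source");              // keep this id across job versions

events
    .keyBy(e -> e.key)
    .map(new EnrichFunction())
    .uid("enrich-map")
    .addSink(new MyEventSink())
    .uid("event-sink");
```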
Cheers,
Max
On Wed, Jun 29, 2016 at 9:00 PM, Josh wrote:
> H
Hi all,
Is there any information out there on how to avoid breaking saved
states/savepoints when making changes to a Flink job and redeploying it?
I want to know how to avoid exceptions like this:
java.lang.RuntimeException: Failed to deserialize state handle and
setup initial operator state.