It might be related to this issue:
https://issues.apache.org/jira/browse/FLINK-10774
On Tue, Mar 26, 2019 at 4:35 PM Fritz Budiyanto wrote:
> Hi All,
>
> We're using Flink-1.4.2 and noticed many dangling connections to Kafka
> after job deletion/recreation. The trigger here is Job cancelation/failure
> due to a network down event followed by Job recreation.
Hi,
> Would the assumption that this possibility (part reported length >
> part file size, as reported by FileStatus on the NN) is only attributable
> to this edge case then be correct?
Yes, I think so.
> Or do you see a case wherein, though the above is true, the part file
> would need truncation as a
Hi All,
We're using Flink-1.4.2 and noticed many dangling connections to Kafka after
job deletion/recreation. The trigger here is Job cancelation/failure due to
a network down event followed by Job recreation.
Our Flink job has checkpointing disabled, and upon job failure (due to
network failure
Sorry to flood this thread, but I'm keeping a record of my experiments:
so far I've been converting the retract stream to a Row and then mapping to
a dynamic POJO that is created (using ByteBuddy) according to the select
fields in the SQL.
Considering the error, I'm now trying to remove the usage of Row and use the
dynamic type d
Debugging locally, it seems like the state descriptor of
"GroupAggregateState" is creating an additional field serializer (a
TupleSerializer of the SumAccumulator) within the RowSerializer. I'm
guessing this is what's causing the incompatibility? Is there any workaround
I can do?
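For reference, a minimal sketch of the retract-to-Row step I described,
using the pre-1.9 Java Table API; the table name and query are placeholders,
and the mapping to the ByteBuddy-generated POJO is left out:

    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.java.StreamTableEnvironment;
    import org.apache.flink.types.Row;

    public class RetractToRow {
        public static DataStream<Tuple2<Boolean, Row>> retract(StreamTableEnvironment tableEnv) {
            // f0 is true for add/update messages and false for retractions.
            Table result = tableEnv.sqlQuery("SELECT a, SUM(b) AS sum_b FROM my_table GROUP BY a");
            return tableEnv.toRetractStream(result, Row.class);
            // From here one would map each Row to the dynamically generated POJO.
        }
    }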
Hi Fabian,
It seems like it didn't work.
Let me specify what I have done:
I have a SQL query that looks something like:
Select a, sum(b), map[ 'sum_c', sum(c), 'sum_d', sum(d)] as my_map FROM...
GROUP BY a
As you said, I'm preventing keys from staying in the state forever with the
idle state retention time (+ I'm tr
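For what it's worth, the retention setting I mean lives on the query config;
a minimal sketch against the pre-1.9 Java Table API, with placeholder values
(12h minimum, 24h maximum):

    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.table.api.StreamQueryConfig;
    import org.apache.flink.table.api.java.StreamTableEnvironment;

    public class RetentionConfig {
        public static StreamQueryConfig configure(StreamTableEnvironment tableEnv) {
            // State for keys that receive no updates within the retention
            // interval is eventually cleaned up.
            return tableEnv.queryConfig()
                    .withIdleStateRetentionTime(Time.hours(12), Time.hours(24));
        }
    }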
Hi,
I wrote recently about our problems with 1.7.2, for which we still haven't
found a solution; the cluster is very unstable. I am now trying to point to
a different problem that may be related somehow and that we don't
understand.
When we restart a Flink session in YARN, we see it takes a
Hello,
Can someone please explain to me the functionality of the blob server in
Flink?
Thanks,
Manju
There are 2 out files and a log file. I will describe the steps to take
below, but I have not found anything to indicate there's a problem when it
cannot allocate any resources (but otherwise runs).
- start default free tier AWS instance
- edit security group to allow 8081 incoming
- sudo yu
Thanks Yun Tang, we will keep that in mind!
> On 26 March 2019, at 11:27, Yun Tang wrote:
>
> Hi Ilya
>
> I believe Java is the main implementation language of Flink internals;
> flink-core is the kernel module and is implemented in Java.
>
> What's more:
> FLIP-6: Replace the Scala-implemented J
Thank you for your email.
Would the assumption that this possibility (part reported length >
part file size, as reported by FileStatus on the NN) is only attributable to
this edge case then be correct?
Or do you see a case wherein, though the above is true, the part file would
need truncation as a
Hi Vishal,
I’ve come across the same problem. The problem is that by default the file
length is not updated when the output stream is not closed properly.
I modified the writer to update file lengths on each flush, but it comes with
some overhead, so this approach should be used when strong con
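The core of that modification, roughly (a sketch, not the exact code; the
helper name is made up): on HDFS, a plain hflush() makes data readable but
does not update the length the NameNode reports, while hsync with
UPDATE_LENGTH does, at extra cost:

    import java.io.IOException;
    import java.util.EnumSet;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.hdfs.client.HdfsDataOutputStream;

    final class LengthUpdatingFlush {
        static void flush(FSDataOutputStream out) throws IOException {
            if (out.getWrappedStream() instanceof HdfsDataOutputStream) {
                // Also updates the file length on the NameNode.
                ((HdfsDataOutputStream) out.getWrappedStream())
                        .hsync(EnumSet.of(HdfsDataOutputStream.SyncFlag.UPDATE_LENGTH));
            } else {
                out.hflush();
            }
        }
    }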
Hi Konstantin
I think there is no direct relationship between registering
chill-protobuf/chill-thrift for Protobuf/Thrift types with Kryo and forcing
POJOs to use Kryo.
Both Protobuf and Thrift types will be extracted as GenericTypeInfo
within the TypeExtractor, which would use the KryoSerializer
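For reference, the registration in question looks like this (following the
documented chill-protobuf setup; MyProtobufMessage stands in for a generated
Protobuf class):

    import com.twitter.chill.protobuf.ProtobufSerializer;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class KryoRegistration {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Only affects types that fall back to Kryo (GenericTypeInfo);
            // proper POJO types keep the POJO serializer unless Kryo is forced.
            env.getConfig().registerTypeWithKryoSerializer(MyProtobufMessage.class, ProtobufSerializer.class);
        }
    }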
Hi Jacky,
you're right that we are currently lacking documentation for the
`mesos-appmaster-job.sh` script. I've added a JIRA issue to cover this [1].
In order to use this script you first need to store a serialized version of
the `JobGraph` you want to run somewhere where the Mesos appmaster can
Good point, Konstantin, that makes sense.
On Tue, Mar 26, 2019 at 10:37 AM Konstantin Knauf
wrote:
> Hi Stephan,
>
> I am in favor of renaming forceKryo() instead of removing it, because users
> might plug in their Protobuf/Thrift serializers via Kryo as advertised in
> our documentation [1]. For
Hi,
I have a job (with Flink 1.6.4) which uses RocksDB incremental checkpointing,
but the checkpointing always fails with `IllegalStateException`,
because when performing `RocksDBIncrementalSnapshotOperation`, RocksDB finds
that `localBackupDirectory`, which should have been created earlier by
RocksDB
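For context, this is roughly how the job enables incremental checkpointing
(the checkpoint URI and interval are placeholders):

    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class Checkpointing {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // The second constructor argument switches on incremental checkpoints.
            env.setStateBackend(new RocksDBStateBackend("hdfs:///flink/checkpoints", true));
            env.enableCheckpointing(60_000);
        }
    }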
Hi Stephan,
I am in favor of renaming forceKryo() instead of removing it, because users
might plug in their Protobuf/Thrift serializers via Kryo, as advertised in
our documentation [1]. For this, Kryo needs to be used for POJO types as
well, if I am not mistaken.
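For reference, the switch under discussion lives on the ExecutionConfig; a
minimal sketch:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ForceKryo {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Forces Kryo to be used even for types that qualify as POJOs.
            env.getConfig().enableForceKryo();
        }
    }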
Cheers,
Konstantin
[1]
https://ci
Thanks for bringing up this DISCUSS, Timo!
The Java Expression DSL is pretty useful for Java users. Once we have the
Java Expression DSL, the Java API will become much richer and easier to use!
+1 from my side.
Best,
Jincheng
On Tue, Mar 26, 2019 at 5:08 PM, Dawid Wysakowicz wrote:
> Hi,
>
> I really like the idea of introducing the Java Expression DSL.
Hi Community,
Can anyone help me understand the execution sequence in batch mode?
1. Can I set slot isolation in batch mode? I can only find the
slotSharingGroup API in streaming mode (see the sketch after this list).
2. When multiple data source parallel instances are allocated to the same
slot, how does Flink run those data sources
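For point 1, this is the streaming API I am referring to; a minimal sketch
(the group name is arbitrary):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SlotIsolation {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Operators in a different slot sharing group are never placed
            // into the same slot as operators outside that group.
            env.fromElements(1, 2, 3)
               .filter(x -> x > 1).slotSharingGroup("isolated")
               .print();
            env.execute("slot sharing demo");
        }
    }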
Hi,
I really like the idea of introducing the Java Expression DSL. I think this
will solve many problems, e.g. right now it's quite tricky how string
literals work in Scala (sometimes they might go through the
ExpressionParser and end up as an UnresolvedFieldReference);
another important problem
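To make the idea concrete, a sketch of the shape such a DSL could take on
the Java side (this is roughly what eventually shipped in later Flink
releases, shown here only to illustrate the proposal):

    import static org.apache.flink.table.api.Expressions.$;
    import org.apache.flink.table.api.Table;

    public class ExpressionDslExample {
        public static Table aggregate(Table table) {
            // Instead of parsing strings like "a, b.sum as total",
            // expressions are built from typed factory methods:
            return table
                    .groupBy($("a"))
                    .select($("a"), $("b").sum().as("total"));
        }
    }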
Hi luyj,
Currently, the Table API does not have triggers, because the behavior of the
windows (unbounded, tumble, slide, session) is well defined. The behavior of
each window is as follows (a usage sketch follows the list):
- Unbounded Window - each set of keys forms a group, and each event
triggers a calculation.
- Tumble Window
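As an example, a daily tumbling window in the string-based Table API of this
era would look like the sketch below; results are emitted once per window
when it closes, which is why no separate trigger is needed. The field names
and "rowtime" are placeholders for your schema:

    import org.apache.flink.table.api.Table;
    import org.apache.flink.table.api.Tumble;

    public class DailyWindow {
        public static Table aggregate(Table table) {
            // "rowtime" must be a declared event-time attribute.
            return table
                    .window(Tumble.over("1.day").on("rowtime").as("w"))
                    .groupBy("w, a")
                    .select("a, b.sum as sum_b, w.end as window_end");
        }
    }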
Compatibility is really important for checkpointed state.
For that, you can always directly specify GenericTypeInfo or AvroTypeInfo
if you want to continue to treat a type via Kryo or Avro.
Alternatively, once https://issues.apache.org/jira/browse/FLINK-11917 is
implemented, this should happen automatically
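For instance, pinning a type to Kryo explicitly could look like this sketch
(MyPojo and the identity map are placeholders):

    import org.apache.flink.api.java.typeutils.GenericTypeInfo;
    import org.apache.flink.streaming.api.datastream.DataStream;

    public class PinToKryo {
        public static DataStream<MyPojo> pin(DataStream<MyPojo> input) {
            // returns() overrides the extracted type info, so the type keeps
            // being serialized with Kryo even if the class qualifies as a POJO.
            return input
                    .map(value -> value)
                    .returns(new GenericTypeInfo<>(MyPojo.class));
        }
    }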
Hi Ilya
I believe Java is the main implementation language of Flink internals;
flink-core is the kernel module and is implemented in Java.
What's more:
FLIP-6: Replace the Scala-implemented JobManager.scala and TaskManager.scala
with the new JobMaster.java and TaskExecutor.java
FLIP-32: Make flink-table Scala
Hello,
I am using the Table API to do some aggregation based on a time window.
In the DataStream API, there is a trigger to control when the aggregation
function should be invoked. Is there a similar thing in the Table API?
Because I am using a large time window, like a day, I want the intermediate
result every tim
Hi Stephan
I prefer to remove 'enableForceKryo', since the Kryo serializer does not
work well out of the box for schema evolution due to its mutable properties,
and our built-in POJO serializer already supports schema evolution.
On the other hand, what's the backward compatibility plan
Hello,
our dev team is choosing a language for developing Flink jobs. Most likely
we will use the flink-streaming API (at least in the very beginning).
Because of the Spark job development experience we had before, the choice
for now is the Scala API.
However, recently I've found a ticket (https://issues