Hi guys,
I'm noticing similar things to what Arvid mentions. I currently work around
the issue by also supporting GenericRecords written and read with the old
schema and parsing them into the new schema myself. This at least gives us
schema evolution until state migration is available.
Thanks for your help!
Cheers,
Hey Flink community,
I have a question on backfilling data and want to get some ideas on how
people handle it.
I have a stream of data going through BucketingSink to S3 and then to
Redshift. If something changed in the logic in Flink and I need to backfill
some dates, for example, we are streaming data for
Hello,
I’m using Flink 1.4.0 with FlinkKafkaConsumer010 and have been for almost a
year. Recently, I started getting messages of the wrong length in Flink,
causing my deserializer to fail. Let me share what I’ve learned:
1. All of my messages are 520 bytes exactly when my producer places
Hi Nico,
> On Feb 26, 2018, at 9:41 AM, Nico Kruber wrote:
>
> Hi Ken,
> LocalFlinkMiniCluster should run checkpoints just fine. It looks like it
> was even attempting to create one but could not finish. Maybe your
> program was not fully running yet?
In the logs I see:
I am seeing a hang where the main thread of CliFrontend goes into timed
waiting. This looks like a livelock. My local setup is simple: a job
manager and a task manager on macOS. My algorithm is based on Gelly's
vertex-centric computation. The resulting graph's vertex count is about 4
million. I am
We could not reproduce this in a controlled setup, but here are a few notes
that we have gathered on a simple "times(n),within(..)" pattern,
in the case where the event does not create a Final or Stop state:
* As the NFA processes an event, the NFA mutates if the event matches. Each
computation is a counter that
Thanks a lot!
On Mon, Feb 26, 2018 at 9:19 AM, Nico Kruber wrote:
> Judging from the code, you should separate different jars with a colon
> ":", i.e. "--addclasspath jar1:jar2"
>
>
> Nico
>
> On 26/02/18 10:36, kant kodali wrote:
> > Hi Gordon,
> >
> > Thanks for the
Hi Ken,
LocalFlinkMiniCluster should run checkpoints just fine. It looks like it
was even attempting to create one but could not finish. Maybe your
program was not fully running yet?
Can you tell us a little bit more about your set up and how you
configured the LocalFlinkMiniCluster?
Nico
On
Hi,
without knowing Gelly here, maybe it has something to do with cleaning
up the allocated memory as mentioned in [1]:
taskmanager.memory.preallocate: Can be either true or false.
Specifies whether task managers should allocate all managed memory when
starting up. (DEFAULT: false). When
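For reference, this option goes into flink-conf.yaml; a minimal fragment flipping it on, shown purely for illustration:

```yaml
# flink-conf.yaml fragment (illustrative): allocate all managed memory
# when the task manager starts, instead of lazily on demand.
taskmanager.memory.preallocate: true
```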
Judging from the code, you should separate different jars with a colon
":", i.e. "--addclasspath jar1:jar2"
Nico
On 26/02/18 10:36, kant kodali wrote:
> Hi Gordon,
>
> Thanks for the response!! How do I add multiple jars to the classpaths?
> Are they separated by a semicolon and still using one
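Spelled out, and assuming this thread is about Flink's Scala shell start script (the jar paths below are made up), the single colon-separated flag would look like:

```shell
# Hypothetical paths; one --addclasspath flag, jars separated by ":"
# in the style of a Unix CLASSPATH:
./bin/start-scala-shell.sh local --addclasspath /opt/deps/jar1.jar:/opt/deps/jar2.jar
```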
Actually, I remembered why we didn't enable it by default. The problem with
this feature is the following: In case of a JM failover it could happen
that all TMs think they got quarantined because the JM ActorSystem is no
longer reachable. Therefore, you could see a lot of TM restarts in this
case
Hi,
it is correct that once a Flink component gets quarantined, e.g. after a lost
ActorSystem message or a heartbeat timeout, it will never be able to talk to
the quarantined or quarantining system again. The only solution is to restart
the respective component. In order to do this automatically, we introduced
I had to solve a similar problem; we use a process function with RocksDB and
map state for the sub-keys. So while we hit RocksDB on every element, only the
specified sub-keys are ever read from disk.
Seth Wiesman | Software Engineer, 4 World Trade Center, 46th Floor, New York, NY
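The pattern Seth describes can be sketched outside Flink with plain maps standing in for RocksDB-backed MapState (all class and method names here are hypothetical): the outer key is what keyBy would scope, and each sub-key is read and written individually.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch of per-sub-key state, with plain maps standing in for
// RocksDB-backed keyed MapState. All names are hypothetical.
public class SubKeyStateSketch {
    // Outer map: the stream key (what Flink's keyBy would scope for us).
    // Inner map: the MapState of sub-keys; with RocksDB, each sub-key is
    // a separate key-value entry, so reading one sub-key does not load
    // the others from disk.
    private final Map<String, Map<String, Long>> state = new HashMap<>();

    // Process one element: touch only the sub-key it names.
    public long add(String key, String subKey, long delta) {
        Map<String, Long> subState =
                state.computeIfAbsent(key, k -> new HashMap<>());
        long updated = subState.getOrDefault(subKey, 0L) + delta;
        subState.put(subKey, updated);
        return updated;
    }

    public long get(String key, String subKey) {
        return state.getOrDefault(key, Collections.<String, Long>emptyMap())
                    .getOrDefault(subKey, 0L);
    }
}
```

In an actual ProcessFunction the inner map would be a MapState obtained via getRuntimeContext().getMapState(...), and Flink scopes the outer key automatically; the per-entry storage in RocksDB is what keeps per-element reads cheap.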
I would like to create a custom aggregator function for a windowed KeyedStream
which I have complete control over - i.e. instead of implementing an
AggregatorFunction, I would like to control the lifecycle of the Flink state by
implementing the CheckpointedFunction interface, though I still
I’d like to write simple Scala code that: 1) reads data from a CSV file (time-series
data, where one column is a timestamp), 2) converts the CSV data into a form
compatible with CEP, 3) defines a pattern for CEP, 4) runs CEP, and 5) writes the
results. (I would very much like to find a complete example of this.)
What
Hi Gordon,
Thanks for the response!! How do I add multiple jars to the classpaths? Are
they separated by a semicolon and still using one flag like "--addclasspath
jar1; jar2" or specify the flag multiple times like "--addclasspath
jar1 --addclasspath
jar2" or specify just the directory
Hi,
If I understood your problem correctly, you want to join two records, one
from each windowed stream.
You can do this by keying and connecting the two streams and apply a
stateful CoFlatMapFunction or CoProcessFunction to join them.
DataStream windowed1 = ...
DataStream windowed2 = ...
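For illustration, the per-key buffering such a CoFlatMapFunction would do can be sketched without Flink (plain HashMaps stand in for keyed state; all names are hypothetical): each side stores its record per key until the other side arrives, then the joined pair is emitted.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the per-key join state a CoFlatMapFunction would keep.
// Plain HashMaps stand in for Flink keyed state; names are hypothetical.
public class TwoStreamJoinSketch {
    private final Map<String, String> left = new HashMap<>();
    private final Map<String, String> right = new HashMap<>();
    private final List<String> output = new ArrayList<>();

    // flatMap1: a record arrives from the first windowed stream.
    public void onLeft(String key, String value) {
        String other = right.remove(key);
        if (other != null) {
            output.add(key + ":" + value + "|" + other); // both sides seen: join
        } else {
            left.put(key, value); // buffer until the right side arrives
        }
    }

    // flatMap2: a record arrives from the second windowed stream.
    public void onRight(String key, String value) {
        String other = left.remove(key);
        if (other != null) {
            output.add(key + ":" + other + "|" + value);
        } else {
            right.put(key, value);
        }
    }

    public List<String> results() {
        return output;
    }
}
```

In real Flink code, the two maps become keyed state inside a CoProcessFunction applied after keyBy(...).connect(keyBy(...)), and a timer would clean up entries whose partner never arrives.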
Hi
Are these IDE instructions only for Java, or also for Scala?
Best,
Esa
From: xingcan [mailto:xc...@foxmail.com]
Sent: Saturday, February 24, 2018 3:25 AM
To: Esa Heikkinen
Cc: user@flink.apache.org
Subject: Re: Is Flink easy to deploy ?
Hi Esa,
Hi,
Queryable state only supports key point queries, i.e., you can query a
keyed state for the value of a key.
Support for SQL is not on the roadmap.
Best, Fabian
2018-02-25 14:26 GMT+01:00 kant kodali :
> Hi All,
>
> 1) Does Queryable State support SQL? By which I mean I