NO :) ...
I usually use the web UI.
Can you point me to an example of how to submit a job?
Using REST? To which port?
thanks,
miki
On Tue, Apr 24, 2018 at 9:42 AM, Gary Yao wrote:
> Hi Miki,
>
> Did you try to submit a job? With the introduction of FLIP-6, resources are
> allocated dynamically.
Hi Alex,
Can you try add the following two lines to your flink-conf.yaml?
akka.framesize: 314572800b
akka.client.timeout: 10min
AFAIK it is not needed to use Java system properties here.
Best,
Gary
On Mon, Apr 23, 2018 at 8:48 PM, Alex Soto wrote:
> Hello,
>
> I am using Flink version 1.4.2.
Hi,
Assuming that you're looking for a streaming use case, I think this is a
better approach:
1. Send Avro from Logstash (better performance).
2. Deserialize it to a POJO.
3. Do the logic...
On Mon, Apr 23, 2018 at 4:03 PM, Lehuede sebastien
wrote:
> Hi Guys,
>
> I'm actually trying to understand the purpose of Table and in particular
Hi Miki,
Did you try to submit a job? With the introduction of FLIP-6, resources are
allocated dynamically.
Best,
Gary
On Tue, Apr 24, 2018 at 8:31 AM, miki haiat wrote:
>
> Hi,
> I'm trying to run FLIP-6 on Mesos but it's not clear to me what the
> correct way to do it is.
>
> I run the session script and I can see that a new framework has been created
Hi,
I'm trying to run FLIP-6 on Mesos but it's not clear to me what the
correct way to do it is.
I run the session script and I can see that a new framework has been created
in Mesos, but the task manager hasn't been created.
Running taskmanager-flip6.sh throws a null pointer exception ...
What is the correct
I noticed the trailing 'b' in your setting.
Have you tried without it?
Cheers
On Mon, Apr 23, 2018 at 11:48 AM, Alex Soto wrote:
> Hello,
>
> I am using Flink version 1.4.2. When I try to run my application on a
> Yarn cluster, I get this error:
>
> 2018-04-23 14:08:13,346 ERROR akka.remote.EndpointWriter
Hi Alex,
Given that you’re hitting a DB, the approach of using multi-threaded access
from a CoFlatMapFunction or AsyncFunction makes sense - you don’t want to try
to abuse Flink’s parallelism.
I’ve done it both ways, so either is an option.
If you use an AsyncFunction, you get the benefit of c
Hello,
I am using Flink version 1.4.2. When I try to run my application on a Yarn
cluster, I get this error:
2018-04-23 14:08:13,346 ERROR akka.remote.EndpointWriter
- Transient association error (association remains live)
akka.remote.OversizedPayloadException
Hi.
What kinds of problems and what configuration should we be aware of?
Med venlig hilsen / Best regards
Lasse Nedergaard
> Den 23. apr. 2018 kl. 13.44 skrev Jörn Franke :
>
> I would disable it if possible and use the Flink parallelism. The threading
> might work but can create operational issues depending on how you configure
> your resource manager.
I have been using the Kafka connector successfully for a while now, but I am
getting weird results in one case.
I have a test that submits 3 streams to Kafka topics and monitors them in a
separate process. The Flink job has a source for each topic, and one such is
fed to 3 separate map functions
Thanks a lot Ted!
I will look into it! If someone else could elaborate on the other bullets it
would be great.
Best,
Max
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
For #4, there was past thread:
http://search-hadoop.com/m/Flink/VkLeQPK4Ek2dYiZS1?subj=Re+Flink+on+Azure+HDInsight
You can find related information on Azure Table in:
docs/dev/batch/connectors.md
FYI
On Mon, Apr 23, 2018 at 4:13 AM, m@xi wrote:
> Hey Michael! Thanks a lot for your answer.
>
Barrier n appearing in all the streams serves as a synchronization point.
As explained in the subsequent paragraph:
bq. Otherwise, it would mix records that belong to snapshot *n* with
records that belong to snapshot *n+1*.
Cheers
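The alignment described above can be sketched as a toy, Flink-independent simulation. This is not Flink's actual implementation — the channel encoding ("channel:value", with "B" as the barrier) and all names are made up for the illustration: once a channel's barrier for snapshot n has arrived, further records from that channel belong to snapshot n+1 and are buffered until the barrier has been seen on every channel.

```java
import java.util.ArrayList;
import java.util.List;

public class BarrierAlignment {
    // A record is "channel:value"; "channel:B" is that channel's barrier n.
    // Records from a channel whose barrier has already arrived are buffered,
    // so snapshot n never mixes with snapshot n+1 records (toy sketch only).
    static List<String> processUntilAligned(List<String> interleaved, int channels) {
        boolean[] barrierSeen = new boolean[channels];
        int barriers = 0;
        List<String> emittedForSnapshotN = new ArrayList<>();
        List<String> bufferedForSnapshotN1 = new ArrayList<>();
        for (String rec : interleaved) {
            String[] parts = rec.split(":");
            int ch = Integer.parseInt(parts[0]);
            if (parts[1].equals("B")) {
                barrierSeen[ch] = true;
                if (++barriers == channels) break; // aligned: snapshot n complete
            } else if (barrierSeen[ch]) {
                bufferedForSnapshotN1.add(rec);    // arrived after this channel's barrier
            } else {
                emittedForSnapshotN.add(rec);      // still belongs to snapshot n
            }
        }
        return emittedForSnapshotN;
    }

    public static void main(String[] args) {
        List<String> in = List.of("0:a", "1:x", "0:B", "0:c", "1:y", "1:B");
        // "0:c" arrives after channel 0's barrier, so it is held back.
        System.out.println(processUntilAligned(in, 2)); // [0:a, 1:x, 1:y]
    }
}
```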
On Mon, Apr 23, 2018 at 7:21 AM, Alexander Smirnov <
alexander.
Hi,
I'm reading documentation about checkpointing:
https://ci.apache.org/projects/flink/flink-docs-master/internals/stream_checkpointing.html
It describes a case when an operator receives data from all its incoming
streams along with barriers. There's also an illustration on that page for
the case
Hi Esa,
there's no built-in support for handling spatial data in Flink.
However, you can use any JVM-based spatial library in your Flink program to
perform such computations.
One option is the ESRI library [1].
Also there is a JIRA issue [2] to add support for a few spatial functions
(as provided by Calcite)
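As a rough, library-free illustration of the kind of boundary-crossing check Esa asked about — this uses only java.awt.geom from the JDK rather than the ESRI library, and the polygon, class, and method names are all made up for the sketch; in a real Flink job the check would sit inside a map/flatMap function applied to each position event:

```java
import java.awt.geom.Path2D;

public class BoundaryCheck {
    // Hypothetical boundary polygon (a 10x10 square for the sketch).
    static final Path2D.Double BOUNDARY = new Path2D.Double();
    static {
        BOUNDARY.moveTo(0, 0);
        BOUNDARY.lineTo(10, 0);
        BOUNDARY.lineTo(10, 10);
        BOUNDARY.lineTo(0, 10);
        BOUNDARY.closePath();
    }

    static boolean inside(double x, double y) {
        return BOUNDARY.contains(x, y);
    }

    // A crossing happens when consecutive positions disagree on containment.
    static boolean crossed(double[] prev, double[] curr) {
        return inside(prev[0], prev[1]) != inside(curr[0], curr[1]);
    }

    public static void main(String[] args) {
        // Object moves from (5,5) inside the boundary to (15,5) outside it.
        System.out.println(crossed(new double[]{5, 5}, new double[]{15, 5})); // true
    }
}
```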
Hi,
I just realized that the documentation completely lacks a section about
implementing custom source connectors :-(
The JavaDocs of the relevant interface classes [1] [2] [3] [4] are quite
extensive though.
I'd also recommend to have a look at the implementations of other source
connectors like
Hi
How can Flink deal with spatial (coordinate position) data?
For example, I would want to check whether the coordinates of some moving
object have crossed some boundary.
Is that possible?
Best, Esa
Hi Guys,
I'm actually trying to understand the purpose of Table and in particular
KafkaJsonTableSource, and to see whether it can be useful for my use case.
Here is my context:
I send logs to Logstash, where I add some information (Type, Tags); Logstash
sends the logs to Kafka in JSON format, and finally I
Hi Timo,
Thanks for your response. We are using the filesystem backend backed by S3.
We were looking for a good long term solution with Avro, so manually
changing the serial version id is probably not the right way to proceed for
us. I think we will wait for Flink 1.6 before trying to properly implement
Hi Miguel,
Actually, a lot has changed since 1.4.
Flink 1.5 will feature a completely reworked (cluster) setup and deployment
model. The dev effort is known as FLIP-6 [1].
So it is not unlikely that you discovered a regression.
Would you mind opening a JIRA ticket for the issue?
Thank you very much,
Fabian
I would disable it if possible and use the Flink parallelism. The threading
might work but can create operational issues depending on how you configure
your resource manager.
> On 23. Apr 2018, at 11:54, Alexander Smirnov
> wrote:
>
> Hi,
>
> I have a co-flatmap function which reads data from
Hi Fabian,
please share the workarounds; they would be helpful for my case as well.
Thank you,
Alex
On Mon, Apr 23, 2018 at 2:14 PM Fabian Hueske wrote:
> Hi Miki,
>
> Sorry for the late response.
> There are basically two ways to implement an enrichment join as in your
> use case.
>
> 1) Keep the meta data in the database and implement a job that reads the
Hi Miki,
Sorry for the late response.
There are basically two ways to implement an enrichment join as in your use
case.
1) Keep the meta data in the database and implement a job that reads the
stream from Kafka and queries the database in an AsyncIO operator for every
stream record. This should b
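A minimal sketch of the per-record lookup pattern in 1), independent of Flink's actual AsyncFunction API — the in-memory "database", class, and method names below are assumptions for illustration; the point is that the lookup returns a future instead of blocking the operator thread:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

public class AsyncEnrichmentSketch {
    // Hypothetical in-memory stand-in for the metadata database.
    static final Map<String, String> DB = Map.of("user-1", "gold", "user-2", "silver");

    // Non-blocking lookup; roughly what an async operator would issue
    // per stream record instead of querying the DB synchronously.
    static CompletableFuture<String> enrich(String key) {
        return CompletableFuture.supplyAsync(
                () -> key + ":" + DB.getOrDefault(key, "unknown"));
    }

    public static void main(String[] args) {
        for (String key : List.of("user-1", "user-2", "user-3")) {
            System.out.println(enrich(key).join());
        }
    }
}
```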
Hey Michael! Thanks a lot for your answer.
1 -- OK then. Seems that Kafka version 0.11 is the most preferable since it
supports exactly-once semantics.
2 -- I have implemented my algorithm in Flink but I would like to implement
it on Kafka streams. All of them should run on a Flink cluster (stand
Hi,
The semantics of the joins offered by the DataStream API in Flink 1.4 and
before as well as the upcoming 1.5 version are a bit messed up, IMO.
Since Flink 1.4, Flink SQL implements a better windowed join [1].
DataStream and SQL can be easily integrated with each other.
A similar implementation
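For reference, a time-windowed join in Flink SQL has roughly this shape (the table and column names are invented for the example; rowtime is assumed to be an event-time attribute):

```sql
SELECT o.id, o.total, s.status
FROM Orders o, Shipments s
WHERE o.id = s.orderId
  AND s.rowtime BETWEEN o.rowtime AND o.rowtime + INTERVAL '1' HOUR
```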
That's absolutely no problem Tzu-Li. Either of them would work. Thank you!
On Thu, Apr 19, 2018 at 4:56 PM Tzu-Li (Gordon) Tai
wrote:
> @Alexander
> Sorry about that, that would be my mistake. I’ll close FLINK-9204 as a
> duplicate and leave my thoughts on FLINK-9155. Thanks for pointing out!
>
Hi,
I have a co-flatmap function which reads data from external DB on specific
events.
The API for the DB layer is homegrown and it uses multiple threads to speed
up reading.
Can it cause any problems if I use the multithreading API in the flatmap1
function? Is it allowed in Flink?
Or, maybe I s