Hi guys,
Recently I pushed a PR on the apache/flink repo (https://github.com/apache/flink/pull/5924); it's a trivial update to README.md, raised
after I surprisingly failed to build with Java 9.
It would be fine if someone just told me that it is meaningless so that I could
close it, but no
We had a job on a Flink 1.4.2 cluster with three TMs experience an odd
failure the other day. It seems that it started as some sort of network
event.
It began with the 3rd TM starting to warn every 30 seconds about socket
timeouts while sending metrics to DataDog. This lasted for the whole
Thanks again!
This is strange. With both Flink 1.3.3 and Flink 1.6.0-SNAPSHOT and
1) copying gradoop-demo-shaded.jar to /lib, and
2) using RemoteEnvironment with just jmHost and jmPort (no jarFiles)
I get the same exception [1], caused by:
*Caused by: com.typesafe.config.ConfigException$Missing:
Hi Michael!
Seems that you were correct. It is weird that I could not set parallelism =
136.
I cannot configure the cluster properly so far. I do everything as it is
described here [1].
It seems that the JobManager is not reachable.
Best,
Max
[1] --
Are you sure the rate limit is coming from the KinesisProducer?
If yes, Kinesis supports 1000 record writes per second per shard. If you hit the
limit, just increase your shard count.
On Fri, Apr 27, 2018 at 8:58 AM, Urs Schoenenberger <
urs.schoenenber...@tngtech.com> wrote:
> Hi all,
>
> we are struggling with
Any iterable of Tuples will work for a for loop: List, Set, etc.
Michael
> On Apr 27, 2018, at 10:47 AM, Soheil Pourbafrani
> wrote:
>
> Thanks, what did you consider the return type of the parse method? ArrayList of
> tuples?
>
> On Friday, April 27, 2018, TechnoMage
Hi all,
we are struggling with RateLimitExceededExceptions with the Kinesis
Producer. The Flink documentation claims that the Flink Producer
overrides the RateLimit setting from Amazon's default of 150 to 100.
I am wondering whether we'd need 100/($sink_parallelism) in order for it
to work
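For reference, a minimal sketch of how such a producer config could be assembled. "RateLimit" is the KPL property name the docs refer to (the percentage of a shard's write capacity one producer instance allows itself to use); dividing it by the sink parallelism is only the hypothesis from the question above, not confirmed behavior, and FlinkKinesisProducer itself is omitted here:

```java
import java.util.Properties;

public class KinesisRateLimitConfig {
    // Builds the producer Properties that would be handed to a
    // FlinkKinesisProducer. "RateLimit" is the KPL setting discussed
    // above; everything else is left at defaults.
    public static Properties producerConfig(int rateLimitPercent) {
        Properties props = new Properties();
        props.setProperty("RateLimit", Integer.toString(rateLimitPercent));
        return props;
    }

    public static void main(String[] args) {
        // The question's hypothesis: with a sink parallelism of 4, give
        // each of the 4 independent KPL instances a quarter of the budget.
        int sinkParallelism = 4;
        Properties props = producerConfig(100 / sinkParallelism);
        System.out.println(props.getProperty("RateLimit")); // prints "25"
    }
}
```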
it would look more like:
for (Tuple2<?, ?> t2 : parse(t.f3)) {
    collector.collect(t2);
}
Michael
> On Apr 27, 2018, at 9:08 AM, Soheil Pourbafrani wrote:
>
> Hi, I want to use flatMap to pass a tuple to a function named 'parse', and it
> will return multiple tuples,
Hi, I want to use flatMap to pass a tuple to a function named 'parse', and it
will return multiple tuples, each of which should be a record in the DataStream
object.
Something like this:
DataStream<Tuple2<...>> res = stream.flatMap(new
FlatMapFunction<Tuple<...>, Tuple2<...>>()
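A self-contained sketch of what the suggested flatMap body could look like. Tuple2, Collector, and parse() here are simplified stand-ins: the real Tuple2 and Collector are Flink's org.apache.flink.api.java.tuple.Tuple2 and org.apache.flink.util.Collector, and parse() is the poster's own method, whose element types aren't shown in the thread, so String/Integer are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

public class FlatMapSketch {
    // Simplified stand-ins for Flink's Tuple2 and Collector types.
    static class Tuple2<A, B> {
        final A f0; final B f1;
        Tuple2(A f0, B f1) { this.f0 = f0; this.f1 = f1; }
    }
    interface Collector<T> { void collect(T record); }

    // Hypothetical parse(): splits a raw string into (token, 1) pairs,
    // returning one tuple per token.
    static List<Tuple2<String, Integer>> parse(String raw) {
        List<Tuple2<String, Integer>> out = new ArrayList<>();
        for (String token : raw.split(",")) {
            out.add(new Tuple2<>(token, 1));
        }
        return out;
    }

    // Body of the flatMap: emit every tuple parse() returns, so each
    // one becomes its own record downstream.
    static void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
        for (Tuple2<String, Integer> t2 : parse(value)) {
            out.collect(t2);
        }
    }
}
```

The same loop is what would go inside the real FlatMapFunction's flatMap(value, out) method.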
Got it, Thanks a lot Fabian. Looking forward to seeing your book.
Best,
Chengzhi
On Thu, Apr 26, 2018 at 4:02 PM, Fabian Hueske wrote:
> You can also merge all three types into an n-ary Either type and union all
> three inputs together.
> However, Flink only supports a binary
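To illustrate the nesting workaround with a minimal stand-in for Flink's binary Either (the real type is org.apache.flink.types.Either; String/Integer/Double here are placeholders for the three input types): nesting two binary Eithers gives a three-way union.

```java
public class NaryEitherSketch {
    // Minimal stand-in for a binary Either. The three-way union is the
    // nested type Either<Either<A, B>, C>.
    static class Either<L, R> {
        final L left; final R right; final boolean isLeft;
        private Either(L l, R r, boolean isLeft) {
            this.left = l; this.right = r; this.isLeft = isLeft;
        }
        static <L, R> Either<L, R> left(L v)  { return new Either<>(v, null, true); }
        static <L, R> Either<L, R> right(R v) { return new Either<>(null, v, false); }
    }

    // Wrappers lifting each of the three input types into the nested union.
    static Either<Either<String, Integer>, Double> ofA(String a)  { return Either.left(Either.left(a)); }
    static Either<Either<String, Integer>, Double> ofB(Integer b) { return Either.left(Either.right(b)); }
    static Either<Either<String, Integer>, Double> ofC(Double c)  { return Either.right(c); }

    // A downstream operator unpacks the nesting to dispatch per type.
    static String describe(Either<Either<String, Integer>, Double> e) {
        if (!e.isLeft) return "C:" + e.right;
        return e.left.isLeft ? "A:" + e.left.left : "B:" + e.left.right;
    }
}
```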
Thank you
2018-04-27 14:55 GMT+02:00 Chesnay Schepler :
> I've responded in the JIRA.
>
>
> On 27.04.2018 14:26, Edward Rojas wrote:
>
>> I'm preparing to migrate my environment from Flink 1.4 to 1.5 and I found
>> this issue.
>>
>> Every time I try to use the flink CLI with
Thanks. Tests and the example folder will help.
--
Dhruv Kumar
PhD Candidate
Department of Computer Science and Engineering
University of Minnesota
www.dhruvkumar.me
> On Apr 27, 2018, at 06:47, Hung wrote:
>
> in my
I've responded in the JIRA.
On 27.04.2018 14:26, Edward Rojas wrote:
I'm preparing to migrate my environment from Flink 1.4 to 1.5 and I found
this issue.
Every time I try to use the flink CLI with the -m option to specify the
jobmanager address, the CLI gets stuck on "Waiting for response..."
I'm preparing to migrate my environment from Flink 1.4 to 1.5 and I found
this issue.
Every time I try to use the flink CLI with the -m option to specify the
jobmanager address, the CLI gets stuck on "Waiting for response..." and I
get the following error on the Jobmanager:
WARN
in my case I usually check the tests they write for each function I want to
use.
Take CountTrigger as an example: if I want to customize my own way of
counting, I will have a look at
the tests they write
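The core decision a customized counting trigger makes can be sketched without Flink at all. This stand-in mirrors the FIRE/CONTINUE logic of Flink's CountTrigger (org.apache.flink.streaming.api.windowing.triggers.CountTrigger); the real trigger keeps its count in fault-tolerant ReducingState rather than a plain field, so this is only an illustration of the control flow:

```java
public class CountTriggerSketch {
    private final long maxCount;
    private long count;

    public CountTriggerSketch(long maxCount) { this.maxCount = maxCount; }

    // Returns true ("FIRE") once the window has seen maxCount elements,
    // then resets; otherwise false ("CONTINUE"). This is the decision a
    // custom counting rule would override.
    public boolean onElement() {
        count++;
        if (count >= maxCount) {
            count = 0;
            return true;  // FIRE
        }
        return false;     // CONTINUE
    }
}
```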
Ah, found the place! In my case they seem to be going to
/home/hadoop/flink-1.5-SNAPSHOT/log/flink-hadoop-client-ip-10-0-10-29.log
(for example).
Any reason why these can't be shown in the Flink UI, maybe in the jobmanager log?
On Fri, Apr 27, 2018 at 12:13 PM, Juho Autio wrote:
The logs logged by my job jar before env.execute can't be found in
jobmanager log. I can't find them anywhere else either.
I can see all the usual logs by Flink components in the jobmanager log,
though. And in taskmanager log I can see both Flink's internal & my
application's logs from the
If a job finishes very quickly it commonly happens that metrics are not
exposed; the WebUI periodically polls metrics, but the polling only
works while the job is running. Reporters that expose metrics
periodically show the same behavior.
On 27.04.2018 09:49, Ganesh Manal wrote:
Hi,
The
Hello
I would like to know the procedure for assigning the JIRA issue. How can I
assign it to myself?
Thanks
Hi,
The custom counter/gauge metrics are generated only in a special case.
For example, if I execute the wordcount example with an HDFS file as input, I am able to see
the counters.
But when wordcount is executed without an HDFS file as input, i.e. using the
default DataSets, the metric counters are not generated.
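The counter itself is incremented by the user function regardless of the input source; whether it is ever reported depends on the job running long enough to be polled. A minimal stand-in for Flink's Counter metric (in a real job you would register it in a RichFunction's open() via getRuntimeContext().getMetricGroup().counter("records-in"); the metric name here is a placeholder):

```java
import java.util.concurrent.atomic.AtomicLong;

public class MetricCounterSketch {
    // Simplified stand-in for Flink's Counter metric interface.
    static class Counter {
        private final AtomicLong value = new AtomicLong();
        void inc() { value.incrementAndGet(); }
        long getCount() { return value.get(); }
    }

    private final Counter recordsIn = new Counter();

    // The user function bumps the counter once per record; this logic is
    // identical whether the records come from HDFS or a built-in DataSet.
    public String map(String value) {
        recordsIn.inc();
        return value;
    }

    public long recordsInCount() { return recordsIn.getCount(); }
}
```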
I
First, a small correction to my previous mail:
I could reproduce your problems locally when submitting the fat-jar.
Turns out I never submitted the fat-jar, as I didn't pass the jar file
argument to RemoteEnvironment.
Now on to your questions:
> What version of Flink are you trying with?
It would be helpful if you provide the complete CLI logs, because even I'm
using the flink run command to submit jobs to a Flink jobmanager running on K8s
and it's working fine. For remote execution using the flink CLI you should
provide a flink-conf.yaml file which contains the job manager address, port and
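For reference, the two flink-conf.yaml keys in question; a minimal fragment (the address value is only a placeholder for wherever your jobmanager is reachable, e.g. a K8s service name):

```yaml
# flink-conf.yaml: point the CLI / clients at the jobmanager
jobmanager.rpc.address: <jobmanager-host>   # placeholder for your host or service name
jobmanager.rpc.port: 6123                   # Flink's default RPC port
```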