Hi chrisr,
It seems there is no single-line way to solve your problem. To print
results on screen, you can use the DataStream.print() / DataSet.print()
method, and to limit the output you can add a FilterFunction. The code
looks like this:
Table projection1 = customers
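The counting logic of such a FilterFunction can be sketched in plain Java. This is my own illustration, not code from the thread: the class name FirstN is hypothetical, and wiring it into DataStream.filter(...) with parallelism 1 (so that a single counter sees all records) is assumed rather than shown.

```java
// A minimal sketch of "keep only the first N records" limiting logic, as one
// might embed in a Flink FilterFunction. Plain Java so it reads standalone;
// the class name FirstN is hypothetical.
class FirstN {
    private final int limit; // how many records to let through
    private int seen = 0;    // how many records have been offered so far

    FirstN(int limit) {
        this.limit = limit;
    }

    // Returns true for the first `limit` calls, false for every call after.
    boolean accept() {
        return seen++ < limit;
    }

    public static void main(String[] args) {
        FirstN firstThree = new FirstN(3);
        for (int i = 0; i < 5; i++) {
            System.out.println("record " + i + " kept: " + firstThree.accept());
        }
    }
}
```

Inside a Flink job this logic would live in a FilterFunction whose filter(value) method delegates to accept(). Note that with parallelism greater than 1, each subtask keeps its own counter, so you would see up to N records per subtask rather than N in total.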
Hi Rinat,
I tried the scenario you described and it works fine for me. The partCounter
is incremented as we hoped, and when a new part file is created I did not see
any duplicate part index. Here is my code for that; you can take a look.
In my case, the highest part file index is part-0-683PartSuffix, other
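For reference, the sink configuration under test looks roughly like this. This is a sketch, not the code from the thread: the base path is illustrative, and the suffix string is chosen only to match the part-0-683PartSuffix file name above.

```java
// Sketch: a BucketingSink with a part suffix, the setup exercised by the
// FLINK-9603 indexing fix. Path and suffix are illustrative, not from the
// thread; `stream` is assumed to be an existing DataStream<String>.
BucketingSink<String> sink = new BucketingSink<>("/tmp/bucketing-out");
sink.setPartPrefix("part");        // files named part-<subtask>-<counter>...
sink.setPartSuffix("PartSuffix");  // ...with this suffix appended
stream.addSink(sink);
```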
Is there a simple way to output the first few rows of a Flink table to stdout
when developing an application? I just want to see the first 10-20 rows on
screen during development to make sure my logic is correct.
There doesn't seem to be something like print(10) in the API to see the first
n rows.
I originally posted to Stack Overflow because I was trying to figure out how
to do this in Flink.
Wondering how to implement something of the sort:
(1) *write up: *
https://drive.google.com/file/d/0Bw69DO1tid2_SzVVendtUV9WMVdIUXptQ1hHSl9KNjAyMTBn/view?usp=drivesdk
(2) *original post: *
Hi mates, could anyone please have a look at my PR, which fixes an issue of
incorrect indexing in the BucketingSink component?
Thanks
> On 18 Jun 2018, at 10:55, Rinat wrote:
>
> I’ve created a JIRA issue https://issues.apache.org/jira/browse/FLINK-9603
>
Thanks, Fabian! This seems to be the way to go.
On Tue, Jun 19, 2018 at 12:18 PM Fabian Hueske wrote:
> Hi Johannes,
>
> You are right. You should approach the problem with the semantics that you
> need before thinking about optimizations such as state size.
>
> The Table API / SQL offers (in
Actually, yes. I have a job already running with "FieldSerializer" in
production. Any insights will be appreciated.
On Sat, Jun 23, 2018 at 7:39 AM, Vishal Santoshi
wrote:
> Thanks.
>
> On Thu, Jun 21, 2018 at 4:34 AM, Tzu-Li (Gordon) Tai
> wrote:
>
>> Hi Vishal,
>>
>> Kryo has a serializer
Thanks.
On Thu, Jun 21, 2018 at 4:34 AM, Tzu-Li (Gordon) Tai
wrote:
> Hi Vishal,
>
> Kryo has a serializer called `CompatibleFieldSerializer` that allows for
> simple backward compatibility changes, such as adding non-optional fields /
> removing fields.
>
> If using the KryoSerializer is a
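Switching a type over to Kryo's CompatibleFieldSerializer in Flink is a small ExecutionConfig change. A sketch of that configuration, where MyEvent stands in for whatever class you need schema-tolerant serialization for:

```java
// Sketch: tell Flink's Kryo to use CompatibleFieldSerializer for a type, so
// state written with an older version of the class (fields added/removed)
// can still be read. MyEvent is a hypothetical placeholder class.
import com.esotericsoftware.kryo.serializers.CompatibleFieldSerializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.getConfig().addDefaultKryoSerializer(MyEvent.class, CompatibleFieldSerializer.class);
```

CompatibleFieldSerializer trades some performance for this flexibility, since it writes field names alongside values; that is the trade-off Gordon's reply goes on to discuss.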
1.
Has anyone done, or can anyone do, a rolling upgrade from 1.4 to 1.5? I am
not sure we can. It seems that the JM cannot recover jobs, with this exception:
Caused by: java.io.InvalidClassException:
org.apache.flink.runtime.jobgraph.tasks.CheckpointCoordinatorConfiguration;
local class incompatible: stream
Yes, it should be "exit". Thanks to Ted Yu, that is exactly right!
Cheers
Zhangminglei
> On 23 Jun 2018, at 12:40 PM, Ted Yu wrote:
>
> For #1, the word "exist" should be "exit", right?
> Thanks
>
> Original message
> From: zhangminglei <18717838...@163.com>
> Date: 6/23/18 10:12 AM (GMT+08:00)
>