fo, or else what exactly is the exception
> there.
>
> Best,
> Stefan
>
> On 10. Dec 2018, at 13:35, Akshay Mendole wrote:
>
Could anyone please help me with this?
Thanks,
Akshay
On Mon, 10 Dec 2018, 6:05 pm Akshay Mendole wrote:
Hi,
I have been facing issues while trying to read from an HDFS sequence file.
This is my code snippet:

DataSource<Tuple2<Text, Text>> input = env
    .createInput(HadoopInputs.readSequenceFile(Text.class, Text.class, ravenDataDir),
        TypeInformation.of(new TypeHint<Tuple2<Text, Text>>() {}));
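For anyone hitting compile errors on `HadoopInputs`: it lives in Flink's Hadoop compatibility module, which is not pulled in by the core dependencies. A sketch of the Maven dependency, assuming a Flink 1.x / Scala 2.11 build (adjust artifact suffix and version to your setup):

```xml
<!-- flink-hadoop-compatibility provides HadoopInputs.readSequenceFile;
     the Scala suffix and version below are assumptions, match your build -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-hadoop-compatibility_2.11</artifactId>
    <version>1.7.0</version>
</dependency>
```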
Upon executing this
cover this overhead memory, or set one slot for each task
> manager.
>
> Best,
> Zhijiang
>
> --
> From: Akshay Mendole
> Sent: Friday, 23 November 2018, 02:54
> To: trohrmann
> Cc: zhijiang ; user ;
> Shreesha M
your
> join result is quite large. One record is 1 GB large. Try to decrease it or
> give more memory to your TMs.
>
> Cheers,
> Till
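Till's suggestion to give the TaskManagers more memory goes in flink-conf.yaml. A minimal sketch, assuming a pre-1.10 Flink where the heap key is taskmanager.heap.size (the memory keys were renamed in later releases; the values here are illustrative, not recommendations):

```yaml
# flink-conf.yaml -- illustrative values, tune to your cluster
taskmanager.heap.size: 8192m        # total heap per TaskManager
taskmanager.numberOfTaskSlots: 1    # one slot per TM, as Zhijiang suggested
```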
>
> On Thu, Nov 22, 2018 at 1:08 PM Akshay Mendole
> wrote:
>
>> Hi Zhijiang,
>> Thanks for the quick reply. My
Hi,
We are converting one of our Pig pipelines to Flink using Apache Beam.
The Pig pipeline reads two different data sets (R1 & R2) from HDFS,
enriches them, joins them and writes the result back to HDFS. The data set
R1 is skewed: it has a few keys with a lot of records. When we converted
the