Hi,
Can you describe how to reproduce this?
Best,
Gary
On Mon, Apr 15, 2019 at 9:26 PM Hao Sun wrote:
> Hi, I cannot find the root cause of this; I think the Hadoop version is mixed
> up between the libs somehow.
>
> --- ERROR ---
> java.text.ParseException: inconsistent module descriptor
Hi, I cannot find the root cause of this; I think the Hadoop version is mixed
up between the libs somehow.
--- ERROR ---
java.text.ParseException: inconsistent module descriptor file found in '
https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop2-uber/2.8.3-1.8.0/flink-shaded-hadoop2
Hi,
I’m using Flink 1.5.6 and Hadoop 2.7.1.
My requirement is to read an HDFS sequence file (SequenceFileInputFormat), then
write it back to HDFS (SequenceFileAsBinaryOutputFormat with compression).
The code below won’t work until I copy the flink-hadoop-compatibility jar to
FLINK_HOME/lib. I find
[pom.xml excerpt; the XML tags were stripped in the archive, coordinates as listed]
…uf-java 2.5.0
org.apache.flink : flink-hadoop-compatibility_${scala.binary.version} : ${flink.version}
org.slf4j : slf4j-api : 1.7.7
org.slf4j : slf4j-log4j12 : 1.7.7 (runtime)
log4j : log4j : 1.2.17 (runtime)
plugin: org.apache.maven.plugins : maven-compiler-plugin : 3.1 (configured with ${java.version})
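[A minimal sketch of the read path described above, assuming Text keys and values and a hypothetical HDFS path; it requires flink-hadoop-compatibility on the classpath, which is the point of this thread.]

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.HadoopInputs;
import org.apache.hadoop.io.Text;

public class SequenceFileReadSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Read an HDFS sequence file with Text keys and values (path is hypothetical).
        DataSet<Tuple2<Text, Text>> input = env.createInput(
                HadoopInputs.readSequenceFile(Text.class, Text.class, "hdfs:///data/input.seq"));
        input.print();
    }
}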
Hi,
Packaging the flink-hadoop-compatibility dependency with your code into a
"fat" job jar should work as well.
Best,
Fabian
On Wed, Apr 10, 2019 at 15:08, Morven Huang <
morven.hu...@gmail.com> wrote:
> Hi,
>
>
>
> I’m using Flink 1.5.6 and Hadoop
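[Fabian's "fat" jar suggestion could look roughly like this in the POM — a sketch; the plugin version and configuration are assumptions, not taken from the thread:]

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <version>3.1.1</version>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
    </execution>
  </executions>
</plugin>

[With flink-hadoop-compatibility in compile scope, the shaded job jar then carries it along, so nothing has to be copied into FLINK_HOME/lib.]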
…the 4th (Thursday) at 2:04 AM wrote:
> Hi all:
>
> I tried to install the Hadoop-free flink-1.7.2 on Azure with Hadoop
> 2.7.
>
> And when I start to submit a flink job to yarn, like this:
>
> bin/flink run -m yarn-cluster -yn
Hi all:
I tried to install the Hadoop-free flink-1.7.2 on Azure with Hadoop 2.7.
And when I start to submit a flink job to yarn, like this:
bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar
Exceptions came out
Thanks for your reply; I fixed the problem by adding a new user.
Root is not available.
> On Mar 4, 2019, at 11:47 AM, sam peng <624645...@qq.com> wrote:
>
>
> I'd like to ask everyone about a problem running MR on Hadoop.
>
> We previously set up a single-node Hadoop, and it ran normally.
>
> Now we have moved Hadoop into the production environment, mounting the hadoop directory on a disk, and using flum
Is there a workaround for
https://issues.apache.org/jira/browse/FLINK-10203, or does it have to be Hadoop
2.7?
I need to process some Parquet data from S3 as a unioned input in my
DataStream pipeline. From what I know, this requires using the
hadoop AvroParquetInputFormat. The problem I'm running into is that it also
requires using un-shaded Hadoop classes that conflict with the Flink shaded
hadoop3
Hi,
In that case, are you sure that your Flink version corresponds to the version
of the flink-hadoop-compatibility jar? It seems that you are using Flink 1.7
for the jar and your cluster needs to run that version as well. IIRC, this
particular class was introduced with 1.7, so using
…TypeInformation
for the class 'org.apache.hadoop.io.Writable'. You may be missing the
'flink-hadoop-compatibility' dependency.
at
org.apache.flink.api.java.typeutils.TypeExtractor.createHadoopWritableTypeInfo(TypeExtractor.java:2082)
at
org.apache.flink.api.java.typeutils.TypeExtractor.privateGe
ichementFlink.java:31)
>
>
> When I add the TypeInformation myself as follows, I run into the same
> issue.
>
> DataSource<Tuple2<Text, Text>> input = env
>     .createInput(HadoopInputs.readSequenceFile(Text.class, Text.class,
>         ravenDataDir));
>
>
>
>
> When
Text.class,
ravenDataDir));
When I add these libraries to the lib folder,
flink-hadoop-compatibility_2.11-1.7.0.jar,
the error changes to this:
java.lang.NoClassDefFoundError:
org/apache/flink/api/common/typeutils/TypeSerializerSnapshot
at
org.apache.flink.api.java.typeutils.WritableTypeInfo.createS
When I try to configure checkpointing using Presto in 1.7.0 the following
exception occurs:
java.lang.NoClassDefFoundError:
org/apache/flink/fs/s3presto/shaded/com/facebook/presto/hadoop/HadoopFileStatus
at
org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.PrestoS3FileSystem.directory
We certainly want to look into hadoop3 support for 1.8, but we'll have
to take a look at the changes to hadoop2 first before we can give any
definitive answer.
On 28.11.2018 07:41, Ming Zhang wrote:
Hi All,
We now plan to move to CDH6, which is based on Hadoop 3. Does anyone know the
plan for Flink Hadoop 3 integration?
thanks in advance.
Ming.He
Hi Hequn & Vino,
Finally I rebuilt Flink by changing the “hadoop.version” in the pom
file.
Because Flink uses the Maven shade plugin to shade the Hadoop dependency,
this also means I need to rebuild the Hadoop shaded jar each time I upgrade
the Flink version.
Best
Henry
> On
Hi Henry,
You can specify a specific Hadoop version to build against:
> mvn clean install -DskipTests -Dhadoop.version=2.6.1
More details here[1].
Best, Hequn
[1]
https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/building.html#hadoop-versions
On Tue, Oct 30, 2018 at 10:02
…Maven, how can I adjust the Hadoop version to match the Hadoop
> version actually used?
> Thanks a lot!
>
> Best
> Henry
>
> On Oct 26, 2018, at 10:02 AM, vino yang wrote:
>
> Hi Henry,
>
> When running Flink on YARN, ClusterEntrypoint prints out the system environment
> info
Hi Vino,
Because I build the project with Maven, maybe I cannot use the jars
downloaded directly from the web.
If built with Maven, how can I adjust the Hadoop version to match the
Hadoop version actually used?
Thanks a lot!
Best
Henry
> On Oct 26, 2018, at 10:02 AM, vino yang
Hi Henry,
When running Flink on YARN, ClusterEntrypoint prints out the system environment
info.
One of the entries is "Hadoop version: 2.4.1"; I think it comes from the
flink-shaded-hadoop2 jar. But the actual system Hadoop version is 2.7.2.
I want to know whether it is OK if the versions are different
Hi Experts
When running Flink on YARN, ClusterEntrypoint prints out the system
environment info.
One of the entries is "Hadoop version: 2.4.1"; I think it comes from the
flink-shaded-hadoop2 jar. But the actual system Hadoop version is 2.7.2.
I want to know is
stacktrace below.
The shading of Hadoop jars started with ticket FLINK-10366
<https://issues.apache.org/jira/browse/FLINK-10366>. Googling the error
didn't help. Could someone please help me?
Thanks and best regards,
Averell
Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no H
There is a Pull Request to enable the new streaming sink for Hadoop < 2.7,
so it may become an option in the next release.
Thanks for bearing with us!
Best,
Stephan
On Sat, Sep 22, 2018 at 2:27 PM Paul Lam wrote:
Hi Stephan!
Unfortunately I'm using Hadoop 2.6, so I have to stick to the old bucketing
sink. I made it work by explicitly setting the Hadoop conf for the bucketing
sink in the user code.
Thank you very much!
Best,
Paul Lam
Stephan Ewen wrote on Fri, Sep 21, 2018 at 6:30 PM:
> Hi!
>
> The old bucketing
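[A minimal sketch of "explicitly setting Hadoop conf for the bucketing sink in the user code"; the config path, element type, and sink path are assumptions:]

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

static void attachSink(DataStream<String> stream) {
    // Pass an explicit Hadoop Configuration to the sink instead of relying
    // on whatever config is discovered from the environment.
    Configuration hadoopConf = new Configuration();
    hadoopConf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml")); // assumed location
    BucketingSink<String> sink = new BucketingSink<String>("hdfs:///logs/out") // hypothetical path
            .setFSConfig(hadoopConf);
    stream.addSink(sink);
}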
Hi Stefan, Stephan,
Yes, the `hadoop.security.group.mapping` option is explicitly set to
`org.apache.hadoop.security.LdapGroupsMapping`. Guess that was why the
classloader found an unshaded class.
I don’t have the permission to change the Hadoop cluster configurations so I
modified the `core
Hi!
A few questions to diagnose/fix this:
Do you explicitly configure the "hadoop.security.group.mapping"?
- If not, this setting may have leaked in from a Hadoop config in the
classpath. We are fixing this in Flink 1.7, to make this insensitive to
such settings leaking in.
Hi,
I’m using the StreamingFileSink of 1.6.0 to write logs to S3 and ran into a
classloader problem. It seems that there are conflicts between
flink-shaded-hadoop2-uber-1.6.0.jar and flink-s3-fs-hadoop-1.6.0.jar, maybe
related to class loading order.
Did anyone meet this problem? Thanks a lot
Hi,
I ran into an UnsatisfiedLinkError when I tried to use
flink-s3-fs-hadoop to sink to S3 on my local Windows machine.
I googled and tried several solutions like downloading hadoop.dll and
winutils.exe, setting the HADOOP_HOME and PATH environment variables, and
copying hadoop.dll to C:\Windows\System32
Hi Vino,
Many thanks.
I can confirm that Flink version flink-1.5.0-bin-hadoop28-scala_2.11 works
fine with Hadoop 3.1.0. I did not need to build it from source.
Regards,
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hi Mich,
I think this depends on the backward compatibility of the Hadoop client
API. In theory, there is no problem.
Hadoop 2.8 to Hadoop 3.0 is a very large upgrade, and I personally recommend
using a client version that is consistent with the Hadoop cluster.
By compiling and packaging from
Hi,
I can run Flink without bundled Hadoop fine. I was wondering if Flink with
Hadoop 2.8 works with Hadoop 3 as well?
Thanks,
Dr Mich Talebzadeh
LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Hi,
I'm trying to stream data to Hadoop as a CSV.
In batch processing I can use HadoopOutputFormat like this (
example/WordCount.java
<https://github.com/apache/flink/blob/master/flink-connectors/flink-hadoop-compatibility/src/test/java/org/apache/flink/test/hadoopcompatibility/mapred
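[For context, a condensed sketch of the batch-side usage the linked WordCount example demonstrates; the counts DataSet and output path are placeholders:]

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.hadoop.mapreduce.HadoopOutputFormat;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

// counts: a DataSet<Tuple2<Text, IntWritable>> produced earlier in the job (assumed)
static void writeViaHadoop(DataSet<Tuple2<Text, IntWritable>> counts, String outputPath) throws Exception {
    Job job = Job.getInstance();
    HadoopOutputFormat<Text, IntWritable> hadoopOF =
            new HadoopOutputFormat<>(new TextOutputFormat<Text, IntWritable>(), job);
    hadoopOF.getConfiguration().set("mapreduce.output.textoutputformat.separator", " ");
    TextOutputFormat.setOutputPath(job, new Path(outputPath));
    counts.output(hadoopOF);
}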
Hi,
sorry, yes, you don't have to add any of the Hadoop dependencies. Everything
that's needed comes in the presto s3 jar.
You should use "s3:" as the prefix, the Presto S3 filesystem will not be used
if you use s3a. And yes, you add config values to the flink config as s3.xxx.
Best
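[Put together, a hedged flink-conf.yaml sketch of the above; the key names follow the s3.xxx mirroring described, and the credentials and bucket are placeholders:]

s3.access-key: <your-access-key>
s3.secret-key: <your-secret-key>
state.checkpoints.dir: s3://my-bucket/checkpoints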
Thanks for picking up my question. I had s3a in the config; I have removed it now.
I will post a full trace soon, but I want to get some questions answered to
help me understand this better.
1. Can I use the presto lib with Flink 1.5 without bundled hdp? Can I use
this?
Hi,
what are you using as the FileSystem scheme? s3 or s3a?
Also, could you also post the full stack trace, please?
Best,
Aljoscha
> On 2. Jun 2018, at 07:34, Hao Sun wrote:
>
> I am trying to figure out how to use S3 as state storage.
> The recommended way is
>
Hi Miki,
Have you enabled checkpointing?
Kostas
> On Jun 5, 2018, at 11:14 AM, miki haiat wrote:
>
> I'm trying to write some data to Hadoop by using this code
>
> The state backend is set without time
> StateBackend sb = new
> FsStateBackend("hdfs://***:9000
I'm trying to write some data to Hadoop by using this code.
The state backend is set without time
StateBackend sb = new
FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints");
env.setStateBackend(sb);
BucketingSink<…> sink =
new BucketingSink<>("hdf
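[As Kostas's question hints, the BucketingSink only moves files from pending to final on completed checkpoints, so the missing piece is presumably enabling checkpointing; a sketch, with the 500 ms interval illustrative:]

import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(500L); // without this, written files stay in in-progress/pending state
env.setStateBackend(new FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints"));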
I am trying to figure out how to use S3 as state storage.
The recommended way is
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/aws.html#shaded-hadooppresto-s3-file-systems-recommended
Seems like I only have to do two things:
1. Put flink-s3-fs-presto into the lib folder
2.
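[Concretely, step 1 presumably amounts to copying the jar shipped in opt/ into lib/; the version in the file name is an assumption:]

cp ./opt/flink-s3-fs-presto-1.5.0.jar ./lib/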
Thanks, in that case it sounds like it is more related to Hadoop classpath
mixups, rather than class loading.
On Mon, Mar 26, 2018 at 3:03 PM, ashish pok <ashish...@yahoo.com> wrote:
> Stephan, we are in 1.4.2.
>
> Thanks,
>
> -- Ashish
>
> On Mon, Mar 26, 2018 at
If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have
Hadoop in your application jar. That can mess up things with child-first
classloading. 1.4.2 should handle Hadoop properly in any case.
On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <ashish...@yahoo.com>
wrote:
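[A quick way to check for an accidental Hadoop dependency in the job jar, assuming a Maven build:]

mvn dependency:tree -Dincludes=org.apache.hadoop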
Hi Ken,
Yes - we are on 1.4. Thanks for that link - it certainly explains how
things are working :)
We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop
classpath” command basically points to HDP 2.6 locations (HDP = Hortonworks
Data Platform). Best guess I have
Hi Ashish,
Are you using Flink 1.4? If so, what does the “hadoop classpath” command return
from the command line where you’re trying to start the job?
Asking because I’d run into issues with
https://issues.apache.org/jira/browse/FLINK-7477
<https://issues.apache.org/jira/browse/FLINK-7
Hi All,
Looks like we are out of the woods for now (so we think) - we went with the
Hadoop-free version and relied on client libraries on the edge node.
However, I am still not very confident, as I started digging into that stack as
well and realized what Till pointed out (the trace leads to a class
Hi Ashish,
Yeah, we also had this problem before.
It can be solved by recompiling Flink with the HDP version of Hadoop
according to the instructions here:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/start/building.html#vendor-specific-versions
Regards,
Kien
On 3/22/2018 12:25 AM
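[Following the linked instructions, the rebuild boils down to something like this; the HDP version string is an example, not taken from the thread:]

mvn clean install -DskipTests -Pvendor-repos -Dhadoop.version=2.7.3.2.6.0.3-8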
Hi Ashish,
the class `RequestHedgingRMFailoverProxyProvider` was only introduced with
Hadoop 2.9.0. My suspicion is thus that you start the client with some
Hadoop 2.9.0 dependencies on the class path. Could you please check the
logs of the client what's on its class path? Maybe you could also
Hi Piotrek,
At this point we are simply trying to start a YARN session.
BTW, we are on Hortonworks HDP 2.6, which is on Hadoop 2.7, if anyone has
experienced similar issues.
We actually pulled 2.6 binaries for the heck of it and ran into the same issues.
I guess we are left with getting the non-hadoop
Hi Piotrek,
Yes, this is a brand new Prod environment. 2.6 was in our lab.
Thanks,
-- Ashish
On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski<pi...@data-artisans.com>
wrote: Hi,
Have you replaced all of your old Flink binaries with freshly downloaded Hadoop
2.7 versions? Are yo
Hi,
Have you replaced all of your old Flink binaries with the freshly downloaded
<https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure
that nothing got mixed up in the process?
Does a simple word count example work on the cluster after the upgrade?
Piotrek
>
Hi All,
We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to
2.7. It was supposed to be an easy lift to get a YARN session, but it doesn't
seem like it :) We definitely are using 2.7 binaries, but it looks like there
is a call here to a private method which screams runtime
That is definitely a good thing to have; I would like to have a discussion
about how to approach that after 1.5 is released.
On Wed, Mar 21, 2018 at 5:39 AM, Jayant Ameta wrote:
>
> Jayant Ameta
Hi,
You can build Flink against Hadoop 2.9:
https://issues.apache.org/jira/browse/FLINK-8177
It seems like convenience binaries will be built by us only since 1.5:
https://issues.apache.org/jira/browse/FLINK-8363
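[Per FLINK-8177, the build against 2.9 would look like the command quoted elsewhere in this archive, e.g.:]

mvn clean install -DskipTests -Dhadoop.version=2.9.0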
No. It is just a log message with no apparent side effects.
-
Best Regards,
Pedro Chaves
I mean Flink 1.4
On Thursday, March 1, 2018, Soheil Pourbafrani
wrote:
> ?
?
(DataStreamer.java:755)
But the job keeps running, apparently without issues.
Flink Version: 1.4.0 bundled with Hadoop 2.8
Hadoop version: 2.8.3
Any ideas on what might be the problem?
-
Best Regards,
Pedro Chaves
…flink-conf.yaml, the change to put all jars returned by “hadoop
classpath” on the classpath means that classes in these jars are found before
the classes in my shaded Flink uber jar.
If I ensure that I don’t have the “hadoop” command set up on my Bash path, then
I don’t run into this issue.
Does this make
Do you think it is that complex to support? I think we can try to implement
it if someone could give us some support (at least the big picture)
On Tue, Jan 16, 2018 at 10:02 AM, Fabian Hueske <fhue...@gmail.com> wrote:
> No, I'm not aware of anybody working on extending the Hadoop comp
No, I'm not aware of anybody working on extending the Hadoop compatibility
support.
I'll also have no time to work on this any time soon :-(
2018-01-13 1:34 GMT+01:00 Flavio Pompermaier <pomperma...@okkam.it>:
> Any progress on this Fabian? HBase bulk loading is a common task for us
nd custom grouping /
>>> sorting functions for Combiners are missing if I remember correctly).
>>> I don't think that anybody is working on that right now, but it would
>>> definitely be a cool feature.
>>>
>>> 2015-04-10 11:55 GMT+02:00 Flavio Pompermaier <p
ver FsStateBackend:
>
> 01/11/2018 09:27:22 Job execution switched to status FAILING.
> org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not
> find a file system implementation for scheme 'hdfs'. The scheme is not
> directly supported by Flink and no Hadoop file syst
implementation for scheme 'hdfs'. The scheme is not
directly supported by Flink and no Hadoop file system to support this
scheme could be loaded.
at
org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320
Hi Sasha,
you're right that if you want to access HDFS from the user code only it
should be possible to use the Hadoop free Flink version and bundle the
Hadoop dependencies with your user code. However, if you want to use
Flink's file system state backend as you did, then you have to start
Hello guys,
I want to clarify for myself: since Flink 1.4.0 allows using a Hadoop-free
distribution with dynamic loading of Hadoop dependencies, I suppose that if I
download the Hadoop-free distribution, start a cluster without any Hadoop, and
then load a job jar which has some Hadoop dependencies (i
used
Thanks for investigating this, Jared. I would summarize it as
Flink-on-Mesos cannot be used in Hadoop-free mode in Flink 1.4.0. I filed
an improvement bug to support this scenario: FLINK-8247
On Tue, Dec 12, 2017 at 11:46 AM, Jared Stehler <
jared.steh...@intellifylearning.com> wrote
Could you look into the flink-shaded-hadoop jar to check whether the
missing class is actually contained?
Where did the flink-shaded-hadoop jar come from? I'm asking because when
building flink-dist from source the jar is called
flink-shaded-hadoop2-uber-1.4.0.jar, which does indeed contain
After upgrading to flink 1.4.0 using the hadoop-free build option, I’m seeing
the following error on startup in the app master:
2017-12-12 18:23:15.473 [main] ERROR
o.a.f.m.r.clusterframework.MesosApplicationMasterRunner - Mesos JobManager
initialization failed
<https://internal.d
a lot.
From: Kostas Kloudas <k.klou...@data-artisans.com>
Sent: Wednesday, 29 November 2017 11:15:16
To: ORIOL LOPEZ SANCHEZ
Cc: user@flink.apache.org
Subject: Re: Are there plans to support Hadoop 2.9.0 in the near future?
Hi Oriol,
As you may have see
Hi Oriol,
As you may have seen from the mailing list, we are currently in the process of
releasing Flink 1.4. This is going
to be a hadoop-free distribution, which means that it should work with any
Hadoop version, including Hadoop 2.9.0.
Given this, I would recommend trying out the release
Hi,
I am running a Flink batch job on a standalone cluster (16 nodes), on top of
Hadoop.
The chain looks like:
DataSet1 = env.readTextFile (csv on hdfs)
.map
.flatMap
.groupBy
.reduce
.map
.writeAsCsv (DataSet 1)
DataSet2 = env.readTextFile
.map
.flatMap
env.readCsvFile (DataSet1)
DataSet1
Can you check the following config in yarn-site.xml ?
yarn.resourcemanager.proxy-user-privileges.enabled (true)
Cheers
On Wed, Aug 16, 2017 at 4:48 PM, Raja.Aravapalli <raja.aravapa...@target.com
> wrote:
>
>
> Hi,
>
>
>
> I triggered an flink yarn-session
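[In yarn-site.xml form, the setting Ted suggests checking would read:]

<property>
  <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
  <value>true</value>
</property>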
Has the Kerberos token expired without being renewed?
> On Aug 16, 2017, at 7:48 PM, Raja.Aravapalli <raja.aravapa...@target.com>
> wrote:
>
>
> Hi,
>
> I triggered a Flink yarn-session on a running Hadoop cluster… and am triggering
> a streaming application on it.
&
Hi,
I triggered a Flink yarn-session on a running Hadoop cluster… and am triggering
a streaming application on it.
But I see that after a few days of running without any issues, the Flink
application which is writing data to HDFS fails with the exception below.
Caused
Yes, it would probably work. I cloned the master repo and compiled with 2.8.0,
and it worked. It would be nice to have 2.8 binaries, since Hadoop 2.8.1 is
also released.
Mustafa Akın
www.mustafaak.in
On Wed, Aug 9, 2017 at 9:00 PM, Eron Wright <eronwri...@gmail.com> wrote:
> For reference: [F
For reference: [FLINK-6466] Build Hadoop 2.8.0 convenience binaries
On Wed, Aug 9, 2017 at 6:41 AM, Aljoscha Krettek <aljos...@apache.org>
wrote:
> So you're saying that this works if you manually compile Flink for Hadoop
> 2.8.0? If yes, I think the solution is that we have to prov
So you're saying that this works if you manually compile Flink for Hadoop
2.8.0? If yes, I think the solution is that we have to provide binaries for
Hadoop 2.8.0. If we did that with a possible Flink 1.3.3 release and starting
from Flink 1.4.0, would this be an option for you?
Best,
Aljoscha
Hi all,
I am trying to use the S3 backend with a custom endpoint. However, it is not
supported in hadoop-aws@2.7.3; I need to use at least version 2.8.0. The
underlying reason is that the requests are being sent as follows:
DEBUG [main] (AmazonHttpClient.java:337) - Sending Request: HEAD
http
/CDH/lib/hbase/lib/guava-12.0.1.jar
/home/.../text-assembly-0.1.0.jar
hdfs:///user/hadoop/wenhao/xj/wenda_search_aggregate_page.txt
and get an error: Caused by: java.lang.ClassNotFoundException:
org.apache.hadoop.hbase.filter.Filter
With flink-1.1.1 it runs successfully; can someone tell me
I don't have it any more, unfortunately. To be clear, I don't think it was
Flink related, but a collision between a Hadoop security library calling
into a Google Guava library, where a method was missing on CacheBuilder in
the latter. Also, to add to the irritation, it only happened in my OSX