Re: inconsistent module descriptor for hadoop uber jar (Flink 1.8)

2019-04-16 Thread Hao Sun
> Best, > Gary > > On Mon, Apr 15, 2019 at 9:26 PM Hao Sun wrote: > >> Hi, I cannot find the root cause of this; I think the Hadoop version is >> mixed up between libs somehow. >> >> --- ERROR --- >> java.text.ParseException: inconsistent module descriptor

Re: inconsistent module descriptor for hadoop uber jar (Flink 1.8)

2019-04-16 Thread Gary Yao
Hi, Can you describe how to reproduce this? Best, Gary On Mon, Apr 15, 2019 at 9:26 PM Hao Sun wrote: > Hi, I cannot find the root cause of this; I think the Hadoop version is mixed > up between libs somehow. > > --- ERROR --- > java.text.ParseException: inconsistent module descri

inconsistent module descriptor for hadoop uber jar (Flink 1.8)

2019-04-15 Thread Hao Sun
Hi, I cannot find the root cause of this; I think the Hadoop version is mixed up between libs somehow. --- ERROR --- java.text.ParseException: inconsistent module descriptor file found in 'https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop2-uber/2.8.3-1.8.0/flink-shaded-hadoop2

Is copying flink-hadoop-compatibility jar to FLINK_HOME/lib the only way to make it work?

2019-04-13 Thread morven huang
Hi, I’m using Flink 1.5.6 and Hadoop 2.7.1. My requirement is to read an HDFS sequence file (SequenceFileInputFormat), then write it back to HDFS (SequenceFileAsBinaryOutputFormat with compression). The code below won’t work until I copy the flink-hadoop-compatibility jar to FLINK_HOME/lib. I find
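
A minimal sketch of the read path under this setup, assuming Text keys and values and a made-up input path — the HadoopInputs.readSequenceFile call is the same one quoted in the "Error while reading from hadoop sequence file" thread further down:

    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.hadoopcompatibility.HadoopInputs;
    import org.apache.hadoop.io.Text;

    public class ReadSeqFile {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // requires flink-hadoop-compatibility on the classpath at runtime --
            // either in FLINK_HOME/lib or bundled into the job jar (see Fabian's reply)
            DataSet<Tuple2<Text, Text>> input = env.createInput(
                    HadoopInputs.readSequenceFile(Text.class, Text.class, "hdfs:///path/to/input"));
            input.first(10).print();
        }
    }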

Re: Is copying flink-hadoop-compatibility jar to FLINK_HOME/lib the only way to make it work?

2019-04-10 Thread Morven Huang
Pom excerpt: uf-java 2.5.0; org.apache.flink : flink-hadoop-compatibility_${scala.binary.version} : ${flink.version}; org.slf4j : slf4j-api : 1.7.7; org.slf4j : slf4j-log4j12 : 1.7.7 (runtime); log4j : log4j : 1.2.17 (runtime); org.apache.maven.plugins : maven-compiler-plugin : 3.1; ${java.version} $

Re: Is copying flink-hadoop-compatibility jar to FLINK_HOME/lib the only way to make it work?

2019-04-10 Thread Fabian Hueske
Hi, Packaging the flink-hadoop-compatibility dependency with your code into a "fat" job jar should work as well. Best, Fabian On Wed, Apr 10, 2019 at 15:08, Morven Huang < morven.hu...@gmail.com> wrote: > Hi, > > > > I’m using Flink 1.5.6 and Hadoop
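
In pom terms, Fabian's suggestion is roughly the following — a sketch reusing the property names from the pom excerpt above; the key point is leaving the dependency at the default compile scope so the shade plugin packs it into the job jar:

    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-hadoop-compatibility_${scala.binary.version}</artifactId>
        <version>${flink.version}</version>
        <!-- no <scope>provided</scope> here: it must end up inside the fat jar -->
    </dependency>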

Is copying flink-hadoop-compatibility jar to FLINK_HOME/lib the only way to make it work?

2019-04-10 Thread Morven Huang
Hi, I’m using Flink 1.5.6 and Hadoop 2.7.1. *My requirement is to read an HDFS sequence file (SequenceFileInputFormat), then write it back to HDFS (SequenceFileAsBinaryOutputFormat with compression).* The code below won’t work until I copy the flink-hadoop-compatibility jar to FLINK_HOME/lib. I

Re: Install flink-1.7.2 on Azure with Hadoop 2.7 failed

2019-04-03 Thread Guowei Ma
…4 (Thu), at 2:04 AM, wrote: > Hi all: > > I tried to install the Hadoop-free flink-1.7.2 on Azure with Hadoop > 2.7. > > And when I start to submit a Flink job to YARN, like this: > > bin/flink run -m yarn-cluster -yn

Install flink-1.7.2 on Azure with Hadoop 2.7 failed

2019-04-03 Thread Reminia Scarlet
Hi all: I tried to install the Hadoop-free flink-1.7.2 on Azure with Hadoop 2.7. And when I started to submit a Flink job to YARN, like this: bin/flink run -m yarn-cluster -yn 2 ./examples/batch/WordCount.jar exceptions came out

Re: Error running an MR program on Hadoop

2019-03-04 Thread sam peng
Thanks for your reply. I fixed the problem by adding a new user; root is not available. > On Mar 4, 2019, at 11:47 AM, sam peng <624645...@qq.com> wrote: > > > A question about running MR on Hadoop. > > We previously set up a single-node Hadoop and it ran fine. > > Now we have moved Hadoop into the production environment, mounted the hadoop directory on a disk, and used flum

1.7.1 and hadoop pre 2.7

2019-02-05 Thread Vishal Santoshi
Is there a workaround for https://issues.apache.org/jira/browse/FLINK-10203, or does it have to be Hadoop 2.7?

how to use Hadoop Inputformats with flink shaded s3?

2019-01-31 Thread Cliff Resnick
I need to process some Parquet data from S3 as a unioned input in my DataStream pipeline. From what I know, this requires using the hadoop AvroParquetInputFormat. The problem I'm running into is that it also requires un-shaded hadoop classes that conflict with the Flink shaded hadoop3
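
Leaving the shading conflict itself aside, the basic wiring through flink-hadoop-compatibility looks roughly like this — a sketch, not the poster's code, assuming parquet-avro on the classpath and a made-up S3 path; a DataSet-style source is shown for brevity:

    import org.apache.avro.generic.GenericRecord;
    import org.apache.flink.api.java.DataSet;
    import org.apache.flink.api.java.ExecutionEnvironment;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.hadoopcompatibility.HadoopInputs;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.parquet.avro.AvroParquetInputFormat;

    public class ReadParquetFromS3 {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            Job job = Job.getInstance();
            // AvroParquetInputFormat is a mapreduce InputFormat<Void, GenericRecord>
            DataSet<Tuple2<Void, GenericRecord>> parquet = env.createInput(
                    HadoopInputs.readHadoopFile(
                            new AvroParquetInputFormat<GenericRecord>(),
                            Void.class, GenericRecord.class,
                            "s3a://my-bucket/parquet-data", job));
            parquet.first(5).print();
        }
    }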

Re: Error while reading from hadoop sequence file

2018-12-13 Thread Stefan Richter
Hi, In that case, are you sure that your Flink version corresponds to the version of the flink-hadoop-compatibility jar? It seems that you are using Flink 1.7 for the jar and your cluster needs to run that version as well. IIRC, this particular class was introduced with 1.7, so using

Re: Error while reading from hadoop sequence file

2018-12-11 Thread Akshay Mendole
formation for the class 'org.apache.hadoop.io.Writable'. You may be missing the 'flink-hadoop-compatibility' dependency. at org.apache.flink.api.java.typeutils.TypeExtractor.createHadoopWritableTypeInfo(TypeExtractor.java:2082) at org.apache.flink.api.java.typeutils.TypeExtractor.privateGe

Re: Error while reading from hadoop sequence file

2018-12-11 Thread Stefan Richter
ut(ExecutionEnvironment.java:551) > flipkart.EnrichementFlink.main(EnrichementFlink.java:31) > > > When I add the TypeInformation myself as follows, I run into the same issue. > DataSource> input = env > .createInput(HadoopInputs.readSequenceFile(Text.class, Text.class

Re: Error while reading from hadoop sequence file

2018-12-10 Thread Akshay Mendole
ichementFlink.java:31) > > > When I add the TypeInformation myself as follows, I run into the same > issue. > > DataSource> input = env > .createInput(HadoopInputs.readSequenceFile(Text.class, Text.class, > ravenDataDir)); > > > > > When

Error while reading from hadoop sequence file

2018-12-10 Thread Akshay Mendole
ext.class, ravenDataDir)); When I add these libraries to the lib folder (flink-hadoop-compatibility_2.11-1.7.0.jar), the error changes to: java.lang.NoClassDefFoundError: org/apache/flink/api/common/typeutils/TypeSerializerSnapshot at org.apache.flink.api.java.typeutils.WritableTypeInfo.createS

Re: flink-s3-fs-presto:1.7.0 is missing shaded com/facebook/presto/hadoop

2018-12-06 Thread Chesnay Schepler
/presto/hadoop/HadoopFileStatus at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.PrestoS3FileSystem.directory(PrestoS3FileSystem.java:446) at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.PrestoS3FileSystem.delete(PrestoS3FileSystem.java:423) at org.apache.flink.fs.s3

flink-s3-fs-presto:1.7.0 is missing shaded com/facebook/presto/hadoop

2018-12-06 Thread Sergei Poganshev
When I try to configure checkpointing using Presto in 1.7.0 the following exception occurs: java.lang.NoClassDefFoundError: org/apache/flink/fs/s3presto/shaded/com/facebook/presto/hadoop/HadoopFileStatus at org.apache.flink.fs.s3presto.shaded.com.facebook.presto.hive.PrestoS3FileSystem.directory

Re: flink hadoop 3 integration plan

2018-11-28 Thread Chesnay Schepler
We certainly want to look into hadoop3 support for 1.8, but we'll have to take a look at the changes to hadoop2 first before we can give any definitive answer. On 28.11.2018 07:41, Ming Zhang wrote: Hi All, we now plan to move to CDH6, which is based on hadoop3; does anyone know the plan for flink

flink hadoop 3 integration plan

2018-11-27 Thread Ming Zhang
Hi All, we now plan to move to CDH6, which is based on hadoop3. Does anyone know the plan for Flink hadoop3 integration? Thanks in advance. Ming.He

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-30 Thread 徐涛
Hi Hequn & Vino, Finally I rebuilt Flink by changing the “hadoop.version” in the pom file. Because Flink uses the maven shade plugin to shade the Hadoop dependency, this also means I need to rebuild the hadoop shaded jar each time I upgrade the Flink version. Best Henry > On

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-29 Thread Hequn Cheng
Hi Henry, You can specify a specific Hadoop version to build against: > mvn clean install -DskipTests -Dhadoop.version=2.6.1 More details here[1]. Best, Hequn [1] https://ci.apache.org/projects/flink/flink-docs-master/flinkDev/building.html#hadoop-versions On Tue, Oct 30, 2018 at 10:02

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-29 Thread vino yang
aven, how can I align the Hadoop version with the Hadoop > version actually used? > Thanks a lot!! > > Best > Henry > > On Oct 26, 2018, at 10:02 AM, vino yang wrote: > > Hi Henry, > > When running flink on YARN, from ClusterEntrypoint the system environment > info is print

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-29 Thread 徐涛
Hi Vino, Because I build the project with Maven, maybe I cannot use the jars downloaded directly from the web. If building with Maven, how can I align the Hadoop version with the Hadoop version actually used? Thanks a lot!! Best Henry > On Oct 26, 2018, at 10:02 AM, vino yang

Re: How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-25 Thread vino yang
Hi Henry, When running flink on YARN, from ClusterEntrypoint the system environment info is printed out. One item is "Hadoop version: 2.4.1”; I think it comes from the flink-shaded-hadoop2 jar. But the system Hadoop version is actually 2.7.2. I want to know whether it is OK if the version is diff

How to tune Hadoop version in flink shaded jar to Hadoop version actually used?

2018-10-25 Thread 徐涛
Hi Experts, When running flink on YARN, from ClusterEntrypoint the system environment info is printed out. One item is "Hadoop version: 2.4.1”; I think it comes from the flink-shaded-hadoop2 jar. But the system Hadoop version is actually 2.7.2. I want to know is

"unable to establish the security context" with shaded Hadoop S3

2018-10-05 Thread Averell
stacktrace below. The shading of hadoop jars started with this ticket: FLINK-10366 <https://issues.apache.org/jira/browse/FLINK-10366>. Googling the error didn't help. Could someone please help me? Thanks and best regards, Averell Setting HADOOP_CONF_DIR=/etc/hadoop/conf because no H

Re: S3 connector Hadoop class mismatch

2018-09-23 Thread Stephan Ewen
There is a Pull Request to enable the new streaming sink for Hadoop < 2.7, so it may become an option in the next release. Thanks for bearing with us! Best, Stephan On Sat, Sep 22, 2018 at 2:27 PM Paul Lam wrote: > > Hi Stephan! > > It's too bad that I'm using Hadoop 2.6, so

Re: S3 connector Hadoop class mismatch

2018-09-22 Thread Paul Lam
Hi Stephan! It's too bad that I'm using Hadoop 2.6, so I have to stick to the old bucketing sink. I made it work by explicitly setting the Hadoop conf for the bucketing sink in the user code. Thank you very much! Best, Paul Lam Stephan Ewen wrote on Fri, Sep 21, 2018 at 6:30 PM: > Hi! > > The old bucketing
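
What "explicitly setting the Hadoop conf for the bucketing sink" can look like — a sketch with a made-up output path; the group-mapping value shown is Hadoop's shell-based default, not anything confirmed in the thread:

    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;
    import org.apache.hadoop.conf.Configuration;

    Configuration hadoopConf = new Configuration();
    // pin the default group mapping so the sink does not pick up
    // LdapGroupsMapping leaking in from the cluster-wide core-site.xml
    hadoopConf.set("hadoop.security.group.mapping",
            "org.apache.hadoop.security.ShellBasedUnixGroupsMapping");

    BucketingSink<String> sink = new BucketingSink<>("hdfs:///tmp/flink-out");
    sink.setFSConfig(hadoopConf);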

Re: S3 connector Hadoop class mismatch

2018-09-21 Thread Paul Lam
Hi Stefan, Stephan, Yes, the `hadoop.security.group.mapping` option is explicitly set to `org.apache.hadoop.security.LdapGroupsMapping`. I guess that was why the classloader found an unshaded class. I don’t have permission to change the Hadoop cluster configuration, so I modified the `core

Re: S3 connector Hadoop class mismatch

2018-09-20 Thread Stephan Ewen
Hi! A few questions to diagnose/fix this: Do you explicitly configure the "hadoop.security.group.mapping"? - If not, this setting may have leaked in from a Hadoop config in the classpath. We are fixing this in Flink 1.7, to make this insensitive to such settings leaking in.

Re: S3 connector Hadoop class mismatch

2018-09-20 Thread Stefan Richter
here are conflicts in > flink-shaded-hadoop2-uber-1.6.0.jar and flink-s3-fs-hadoop-1.6.0.jar, and > maybe related to class loading orders. > > Did anyone meet this problem? Thanks a lot! > > The stack traces are as below: > > java.io.IOEx

S3 connector Hadoop class mismatch

2018-09-19 Thread Paul Lam
Hi, I’m using the StreamingFileSink of 1.6.0 to write logs to S3 and encountered a classloader problem. It seems that there are conflicts between flink-shaded-hadoop2-uber-1.6.0.jar and flink-s3-fs-hadoop-1.6.0.jar, maybe related to class loading order. Has anyone met this problem? Thanks a lot

UnsatisfiedLinkError when using flink-s3-fs-hadoop

2018-09-10 Thread yinhua.dai
Hi, I have experienced an UnsatisfiedLinkError when trying to use flink-s3-fs-hadoop to sink to S3 on my local Windows machine. I googled and tried several solutions, like downloading hadoop.dll and winutils.exe, setting the HADOOP_HOME and PATH environment variables, and copying hadoop.dll to C:\Windows\System32

Re: Does Flink release Hadoop(R) 2.8 work with Hadoop 3?

2018-07-28 Thread Mich Talebzadeh
Hi Vino, Many thanks. I can confirm that Flink version flink-1.5.0-bin-hadoop28-scala_2.11 works fine with Hadoop 3.1.0. I did not need to build it from source. Regards, Dr Mich Talebzadeh LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Re: Does Flink release Hadoop(R) 2.8 work with Hadoop 3?

2018-07-28 Thread vino yang
Hi Mich, I think this depends on the backward compatibility of the Hadoop client API. In theory, there is no problem. Hadoop 2.8 to Hadoop 3.0 is a very large upgrade, and I personally recommend using a client version that is consistent with the Hadoop cluster. By compiling and packaging from

Does Flink release Hadoop(R) 2.8 work with Hadoop 3?

2018-07-27 Thread Mich Talebzadeh
Hi, I can run Flink without bundled Hadoop fine. I was wondering if Flink with Hadoop 2.8 works with Hadoop 3 as well? Thanks, Dr Mich Talebzadeh LinkedIn: https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Writing csv to Hadoop Data stream

2018-06-11 Thread miki haiat
Hi, I'm trying to stream data to Hadoop as a CSV. In batch processing I can use HadoopOutputFormat like this (example/WordCount.java: https://github.com/apache/flink/blob/master/flink-connectors/flink-hadoop-compatibility/src/test/java/org/apache/flink/test/hadoopcompatibility/mapred
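
The streaming counterpart people usually reached for at the time was the BucketingSink with a StringWriter — a sketch, assuming flink-connector-filesystem on the classpath and made-up host, path, and schema; note that checkpointing must be enabled, which is the theme of the "Writing stream to Hadoop" thread below:

    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.fs.StringWriter;
    import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

    public class StreamCsvToHdfs {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.enableCheckpointing(60_000L); // BucketingSink finalizes files on checkpoints

            BucketingSink<String> sink = new BucketingSink<>("hdfs://namenode:9000/flink/csv_out");
            sink.setWriter(new StringWriter<>()); // writes each record as one line

            env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 2))
               .map(t -> t.f0 + "," + t.f1)   // render each record as a CSV line
               .returns(Types.STRING)
               .addSink(sink);

            env.execute("stream csv to hdfs");
        }
    }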

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Hao Sun
>> Thanks >> >> On Tue, Jun 5, 2018 at 9:39 AM Aljoscha Krettek >> wrote: >> >>> Hi, >>> >>> sorry, yes, you don't have to add any of the Hadoop dependencies. >>> Everything that's needed comes in the presto s3 jar. >>>

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Hao Sun
On Tue, Jun 5, 2018 at 9:39 AM Aljoscha Krettek > wrote: > >> Hi, >> >> sorry, yes, you don't have to add any of the Hadoop dependencies. >> Everything that's needed comes in the presto s3 jar. >> >> You should use "s3:" as the prefix, the Pr

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Hao Sun
Aljoscha Krettek wrote: > Hi, > > sorry, yes, you don't have to add any of the Hadoop dependencies. > Everything that's needed comes in the presto s3 jar. > > You should use "s3:" as the prefix, the Presto S3 filesystem will not be > used if you use s3a. And yes, you

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Aljoscha Krettek
Hi, sorry, yes, you don't have to add any of the Hadoop dependencies. Everything that's needed comes in the presto s3 jar. You should use "s3:" as the prefix; the Presto S3 filesystem will not be used if you use s3a. And yes, you add config values to the flink config as s3.xxx. Best
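
Concretely, the s3.xxx pattern gives flink-conf.yaml entries like these — a sketch; the two key names follow the Flink AWS docs, and the placeholder values are not real credentials:

    # flink-conf.yaml -- forwarded to the shaded Presto S3 filesystem
    s3.access-key: YOUR_ACCESS_KEY
    s3.secret-key: YOUR_SECRET_KEY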

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Hao Sun
Thanks for picking up my question. I had s3a in the config; now I have removed it. I will post a full trace soon, but want to get some questions answered to help me understand this better. 1. Can I use the presto lib with Flink 1.5 without the bundled Hadoop? Can I use this?

Re: Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-05 Thread Aljoscha Krettek
Hi, what are you using as the FileSystem scheme? s3 or s3a? Also, could you post the full stack trace, please? Best, Aljoscha > On 2. Jun 2018, at 07:34, Hao Sun wrote: > > I am trying to figure out how to use S3 as state storage. > The recommended way is >

Re: Writing stream to Hadoop

2018-06-05 Thread miki haiat
>> Have you enabled checkpointing? >> >> Kostas >> >> On Jun 5, 2018, at 11:14 AM, miki haiat wrote: >> >> I'm trying to write some data to Hadoop by using this code >> >> The state backend is set without time >> >> Sta

Re: Writing stream to Hadoop

2018-06-05 Thread Chesnay Schepler
l(500L); return this; } On Tue, Jun 5, 2018 at 1:11 PM Kostas Kloudas <k.klou...@data-artisans.com> wrote: Hi Miki, Have you enabled checkpointing? Kostas On Jun 5, 2018, at 11:14 AM, miki haiat <miko5...@gmail.com> wrote: I'm trying to write s

Re: Writing stream to Hadoop

2018-06-05 Thread miki haiat
as > > On Jun 5, 2018, at 11:14 AM, miki haiat wrote: > > I'm trying to write some data to Hadoop by using this code > > The state backend is set without time > > StateBackend sb = new > FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints"); > env

Re: Writing stream to Hadoop

2018-06-05 Thread Kostas Kloudas
Hi Miki, Have you enabled checkpointing? Kostas > On Jun 5, 2018, at 11:14 AM, miki haiat wrote: > > I'm trying to write some data to Hadoop by using this code > > The state backend is set without time > StateBackend sb = new > FsStateBackend("hdfs://***:9000

Re: Writing stream to Hadoop

2018-06-05 Thread Marvin777
ing to write some data to Hadoop by using this code > > The state backend is set without time > > StateBackend sb = new > FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints"); > env.setStateBackend(sb); > > BucketingSink> sink = > new B

Writing stream to Hadoop

2018-06-05 Thread miki haiat
I'm trying to write some data to Hadoop by using this code. The state backend is set without time. StateBackend sb = new FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints"); env.setStateBackend(sb); BucketingSink> sink = new BucketingSink<>("hdf
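
Kostas's question in the replies above points at the usual cause: without checkpointing, the BucketingSink never promotes files out of the .pending state. A minimal fix on top of the quoted snippet — the 500 ms interval echoes the value visible in Chesnay's reply:

    StateBackend sb = new FsStateBackend("hdfs://***:9000/flink/my_city/checkpoints");
    env.setStateBackend(sb);
    env.enableCheckpointing(500L); // without this, BucketingSink leaves .pending files forever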

Do I still need hadoop-aws libs when using Flink 1.5 and Presto?

2018-06-01 Thread Hao Sun
I am trying to figure out how to use S3 as state storage. The recommended way is https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/aws.html#shaded-hadooppresto-s3-file-systems-recommended Seems like I only have to do two things: 1. Put flink-s3-fs-presto into lib. 2.

Re: Error running on Hadoop 2.7

2018-03-27 Thread Stephan Ewen
Thanks, in that case it sounds like it is more related to Hadoop classpath mixups, rather than class loading. On Mon, Mar 26, 2018 at 3:03 PM, ashish pok <ashish...@yahoo.com> wrote: > Stephan, we are in 1.4.2. > > Thanks, > > -- Ashish > > On Mon, Mar 26, 2018 at

Re: Error running on Hadoop 2.7

2018-03-26 Thread ashish pok
Stephan, we are in 1.4.2. Thanks, -- Ashish On Mon, Mar 26, 2018 at 7:38 AM, Stephan Ewen <se...@apache.org> wrote: If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have Hadoop in your application jar. That can mess up things with child-first classloading.

Re: Error running on Hadoop 2.7

2018-03-26 Thread Stephan Ewen
If you are on Flink 1.4.0 or 1.4.1, please check if you accidentally have Hadoop in your application jar. That can mess up things with child-first classloading. 1.4.2 should handle Hadoop properly in any case. On Sun, Mar 25, 2018 at 3:26 PM, Ashish Pokharel <ashish...@yahoo.com> wrote:

Re: Error running on Hadoop 2.7

2018-03-25 Thread Ashish Pokharel
Hi Ken, Yes - we are on 1.4. Thanks for that link - it certainly explains how things are working :) We currently don’t have the HADOOP_CLASSPATH env var set up, and the “hadoop classpath” command basically points to HDP 2.6 locations (HDP = Hortonworks Data Platform). Best guess I have

Re: Error running on Hadoop 2.7

2018-03-22 Thread Ken Krugler
Hi Ashish, Are you using Flink 1.4? If so, what does the “hadoop classpath” command return from the command line where you’re trying to start the job? Asking because I’d run into issues with https://issues.apache.org/jira/browse/FLINK-7477

Re: Error running on Hadoop 2.7

2018-03-22 Thread Ashish Pokharel
Hi All, Looks like we are out of the woods for now (so we think) - we went with the Hadoop-free version and relied on client libraries on the edge node. However, I am still not very confident, as I started digging into that stack as well and realized what Till pointed out (traces lead to a class

Re: Error running on Hadoop 2.7

2018-03-22 Thread Kien Truong
Hi Ashish, Yeah, we also had this problem before. It can be solved by recompiling Flink with the HDP version of Hadoop according to the instructions here: https://ci.apache.org/projects/flink/flink-docs-release-1.4/start/building.html#vendor-specific-versions Regards, Kien On 3/22/2018 12:25 AM

Re: Error running on Hadoop 2.7

2018-03-22 Thread Till Rohrmann
Hi Ashish, the class `RequestHedgingRMFailoverProxyProvider` was only introduced with Hadoop 2.9.0. My suspicion is thus that you start the client with some Hadoop 2.9.0 dependencies on the class path. Could you please check in the client logs what's on its class path? Maybe you could also

Re: Error running on Hadoop 2.7

2018-03-21 Thread ashish pok
Hi Piotrek, At this point we are simply trying to start a YARN session. BTW, we are on Hortonworks HDP 2.6, which is on Hadoop 2.7, if anyone has experienced similar issues. We actually pulled the 2.6 binaries for the heck of it and ran into the same issues. I guess we are left with getting the non-hadoop

Re: Error running on Hadoop 2.7

2018-03-21 Thread ashish pok
Hi Piotrek, Yes, this is a brand new Prod environment. 2.6 was in our lab. Thanks, -- Ashish On Wed, Mar 21, 2018 at 11:39 AM, Piotr Nowojski <pi...@data-artisans.com> wrote: Hi, Have you replaced all of your old Flink binaries with freshly downloaded Hadoop 2.7 versions? Are yo

Re: Error running on Hadoop 2.7

2018-03-21 Thread Piotr Nowojski
Hi, Have you replaced all of your old Flink binaries with the freshly downloaded <https://flink.apache.org/downloads.html> Hadoop 2.7 versions? Are you sure nothing got mixed up in the process? Does a simple word count example work on the cluster after the upgrade? Piotrek >

Error running on Hadoop 2.7

2018-03-21 Thread ashish pok
Hi All, We ran into a roadblock in our new Hadoop environment, migrating from 2.6 to 2.7. It was supposed to be an easy lift to get a YARN session but doesn't seem like it :) We definitely are using the 2.7 binaries, but it looks like there is a call here to a private method which screams runtime

Re: Is Hadoop 3.0 integration planned?

2018-03-21 Thread Stephan Ewen
That is definitely a good thing to have; I would like to have a discussion about how to approach that after 1.5 is released. On Wed, Mar 21, 2018 at 5:39 AM, Jayant Ameta wrote: > > Jayant Ameta >

Is Hadoop 3.0 integration planned?

2018-03-20 Thread Jayant Ameta
Jayant Ameta

Re: Does Flink support Hadoop (HDFS) 2.9 ?

2018-03-01 Thread Piotr Nowojski
Hi, You can build Flink against Hadoop 2.9: https://issues.apache.org/jira/browse/FLINK-8177 It seems like convenience binaries will only be built by us starting with 1.5: https://issues.apache.org/jira/browse/FLINK-8363

Re: Getting warning messages (...hdfs.DataStreamer - caught exception) while running Flink with Hadoop as the state backend

2018-03-01 Thread PedroMrChaves
No. It is just a log message with no apparent side effects. - Best Regards, Pedro Chaves

Re: Does Flink support Hadoop (HDFS) 2.9 ?

2018-03-01 Thread Soheil Pourbafrani
I mean Flink 1.4 On Thursday, March 1, 2018, Soheil Pourbafrani wrote: > ?

Re: Getting warning messages (...hdfs.DataStreamer - caught exception) while running Flink with Hadoop as the state backend

2018-03-01 Thread Stephan Ewen
55) > But the job keeps running, apparently without issues. > > Flink Version: 1.4.0 bundled with Hadoop 2.8 > > Hadoop version: 2.8.3 > > Any ideas on what might be the problem? > > - > Best Regards, > Pedro Chaves

Does Flink support Hadoop (HDFS) 2.9 ?

2018-03-01 Thread Soheil Pourbafrani
?

Getting warning messages (...hdfs.DataStreamer - caught exception) while running Flink with Hadoop as the state backend

2018-03-01 Thread PedroMrChaves
(DataStreamer.java:755) But the job keeps running, apparently without issues. Flink Version: 1.4.0 bundled with Hadoop 2.8 Hadoop version: 2.8.3 Any ideas on what might be the problem? - Best Regards, Pedro Chaves

Problems caused by use of hadoop classpath bash command

2018-01-24 Thread Ken Krugler
f.yaml, the change to put all jars returned by “hadoop classpath” on the classpath means that classes in these jars are found before the classes in my shaded Flink uber jar. If I ensure that I don’t have the “hadoop” command set up on my Bash path, then I don’t run into this issue. Does this make

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Fabian Hueske
.it>: > Do you think it is that complex to support? I think we can try to > implement it if someone could give us some support (at least the big > picture) > > On Tue, Jan 16, 2018 at 10:02 AM, Fabian Hueske <fhue...@gmail.com> wrote: > >> No, I'm not aware of any

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Flavio Pompermaier
Do you think it is that complex to support? I think we can try to implement it if someone could give us some support (at least the big picture). On Tue, Jan 16, 2018 at 10:02 AM, Fabian Hueske <fhue...@gmail.com> wrote: > No, I'm not aware of anybody working on extending the Hadoop comp

Re: Hadoop compatibility and HBase bulk loading

2018-01-16 Thread Fabian Hueske
No, I'm not aware of anybody working on extending the Hadoop compatibility support. I also won't have time to work on this any time soon :-( 2018-01-13 1:34 GMT+01:00 Flavio Pompermaier <pomperma...@okkam.it>: > Any progress on this, Fabian? HBase bulk loading is a common task for us

Re: Hadoop compatibility and HBase bulk loading

2018-01-12 Thread Flavio Pompermaier
nd custom grouping / >>> sorting functions for Combiners are missing if I remember correctly). >>> I don't think that anybody is working on that right now, but it would >>> definitely be a cool feature. >>> >>> 2015-04-10 11:55 GMT+02:00 Flavio Pompermaier <p

Re: hadoop-free hdfs config

2018-01-11 Thread Till Rohrmann
ver FsStateBackend: > > 01/11/2018 09:27:22 Job execution switched to status FAILING. > org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not > find a file system implementation for scheme 'hdfs'. The scheme is not > directly supported by Flink and no Hadoop file syst

Re: hadoop-free hdfs config

2018-01-11 Thread Oleksandr Baliev
implementation for scheme 'hdfs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded. at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405) at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320

Re: hadoop-free hdfs config

2018-01-10 Thread Till Rohrmann
Hi Sasha, you're right that if you want to access HDFS from the user code only, it should be possible to use the Hadoop-free Flink version and bundle the Hadoop dependencies with your user code. However, if you want to use Flink's file system state backend as you did, then you have to start
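
The "bundle the Hadoop dependencies with your user code" half of Till's answer, as a pom sketch — hadoop-client and the version shown are assumptions; match whatever your HDFS cluster actually runs:

    <!-- packed into the fat job jar for user-code-only HDFS access -->
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>2.7.2</version>
    </dependency>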

hadoop-free hdfs config

2018-01-09 Thread Oleksandr Baliev
Hello guys, I want to clarify for myself: since flink 1.4.0 allows using a hadoop-free distribution and dynamic loading of hadoop dependencies, I suppose that if I download the hadoop-free distribution, start the cluster without any hadoop, and then load a job jar which has some hadoop dependencies (I used

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Eron Wright
Thanks for investigating this, Jared. I would summarize it as: Flink-on-Mesos cannot be used in Hadoop-free mode in Flink 1.4.0. I filed an improvement bug to support this scenario: FLINK-8247 On Tue, Dec 12, 2017 at 11:46 AM, Jared Stehler <jared.steh...@intellifylearning.com> wrote

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Jared Stehler
- Intellify Learning o: 617.701.6330 x703 > On Dec 12, 2017, at 2:10 PM, Chesnay Schepler <ches...@apache.org> wrote: > > Could you look into the flink-shaded-hadoop jar to check whether the missing > class is actually contained? > > Where did the flink-shaded-hadoop j

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Jared Stehler
${flink.version} -- Jared Stehler Chief Architect - Intellify Learning o: 617.701.6330 x703 > On Dec 12, 2017, at 2:10 PM, Chesnay Schepler <ches...@apache.org> wrote: > > Could you look into the flink-shaded-hadoop jar to check whether the missing > class is actually contain

Re: hadoop error with flink mesos on startup

2017-12-12 Thread Chesnay Schepler
Could you look into the flink-shaded-hadoop jar to check whether the missing class is actually contained? Where did the flink-shaded-hadoop jar come from? I'm asking because when building flink-dist from source the jar is called flink-shaded-hadoop2-uber-1.4.0.jar, which does indeed contain

hadoop error with flink mesos on startup

2017-12-12 Thread Jared Stehler
After upgrading to flink 1.4.0 using the hadoop-free build option, I’m seeing the following error on startup in the app master: 2017-12-12 18:23:15.473 [main] ERROR o.a.f.m.r.clusterframework.MesosApplicationMasterRunner - Mesos JobManager initialization failed <https://internal.d

Re: Are there plans to support Hadoop 2.9.0 in the near future?

2017-11-29 Thread Kostas Kloudas
Kloudas <k.klou...@data-artisans.com> > Sent: Wednesday, 29 November 2017 11:15:16 > To: ORIOL LOPEZ SANCHEZ > Cc: user@flink.apache.org > Subject: Re: Are there plans to support

Re: Are there plans to support Hadoop 2.9.0 in the near future?

2017-11-29 Thread ORIOL LOPEZ SANCHEZ
a lot. From: Kostas Kloudas <k.klou...@data-artisans.com> Sent: Wednesday, 29 November 2017 11:15:16 To: ORIOL LOPEZ SANCHEZ Cc: user@flink.apache.org Subject: Re: Are there plans to support Hadoop 2.9.0 in the near future? Hi Oriol, As you may have see

Re: Are there plans to support Hadoop 2.9.0 in the near future?

2017-11-29 Thread Kostas Kloudas
Hi Oriol, As you may have seen from the mailing list, we are currently in the process of releasing Flink 1.4. This is going to be a hadoop-free distribution, which means that it should work with any hadoop version, including Hadoop 2.9.0. Given this, I would recommend trying out the release

Execution Failed (cluster setup Flink+Hadoop), Task Manager was lost/killed

2017-10-19 Thread Oleksandra Levchenko
Hi, I am running a Flink batch job on a Standalone Cluster (16 nodes), on top of Hadoop. The chain looks like: DataSet1 = env.readTextFile (csv on hdfs) .map .flatMap .groupBy .reduce .map .writeAsCsv (DataSet 1) DataSet2 = env.readTextFile .map .flatMap env.readCsvFile (DataSet1) DataSet1

Re: hadoop

2017-08-16 Thread Ted Yu
Can you check the following config in yarn-site.xml? yarn.resourcemanager.proxy-user-privileges.enabled (true) Cheers On Wed, Aug 16, 2017 at 4:48 PM, Raja.Aravapalli <raja.aravapa...@target.com> wrote: > > > Hi, > > > > I triggered a flink yarn-session
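
Ted's suggestion as a yarn-site.xml fragment — the property name and value are exactly the ones he names:

    <property>
        <name>yarn.resourcemanager.proxy-user-privileges.enabled</name>
        <value>true</value>
    </property>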

Re: hadoop

2017-08-16 Thread Will Du
Has the kerberos token expired without being renewed? > On Aug 16, 2017, at 7:48 PM, Raja.Aravapalli <raja.aravapa...@target.com> > wrote: > > > Hi, > > I triggered a flink yarn-session on a running Hadoop cluster… and am triggering > a streaming application on that.

hadoop

2017-08-16 Thread Raja . Aravapalli
Hi, I triggered a flink yarn-session on a running Hadoop cluster… and am triggering a streaming application on that. But I see that after a few days of running without any issues, the flink application which is writing data to hdfs fails with the below exception. Caused

Re: Using Hadoop 2.8.0 in Flink Project for S3A Path Style Access

2017-08-10 Thread Aljoscha Krettek
.8.0 and > it worked. It would be nice to have 2.8 binaries since Hadoop 2.8.1 is also > released > > Mustafa Akın > www.mustafaak.in > > On Wed, Aug 9, 2017 at 9:00 PM, Eron Wright <eronwri...@gmail.com> wrote

Re: Using Hadoop 2.8.0 in Flink Project for S3A Path Style Access

2017-08-10 Thread Mustafa AKIN
Yes, it would probably work. I cloned the master repo and compiled with 2.8.0 and it worked. It would be nice to have 2.8 binaries since Hadoop 2.8.1 is also released. Mustafa Akın www.mustafaak.in On Wed, Aug 9, 2017 at 9:00 PM, Eron Wright <eronwri...@gmail.com> wrote: > For reference: [F

Re: Using Hadoop 2.8.0 in Flink Project for S3A Path Style Access

2017-08-09 Thread Eron Wright
For reference: [FLINK-6466] Build Hadoop 2.8.0 convenience binaries On Wed, Aug 9, 2017 at 6:41 AM, Aljoscha Krettek <aljos...@apache.org> wrote: > So you're saying that this works if you manually compile Flink for Hadoop > 2.8.0? If yes, I think the solution is that we have to prov

Re: Using Hadoop 2.8.0 in Flink Project for S3A Path Style Access

2017-08-09 Thread Aljoscha Krettek
So you're saying that this works if you manually compile Flink for Hadoop 2.8.0? If yes, I think the solution is that we have to provide binaries for Hadoop 2.8.0. If we did that with a possible Flink 1.3.3 release and starting from Flink 1.4.0, would this be an option for you? Best, Aljoscha

Using Hadoop 2.8.0 in Flink Project for S3A Path Style Access

2017-07-11 Thread Mustafa AKIN
Hi all, I am trying to use the S3 backend with a custom endpoint. However, it is not supported in hadoop-aws@2.7.3; I need to use at least version 2.8.0. The underlying reason is that the requests are being sent as follows: DEBUG [main] (AmazonHttpClient.java:337) - Sending Request: HEAD http
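
The knob in question, as a Hadoop configuration sketch — the endpoint value is made up, and per Mustafa's report the path-style flag is the part that needs hadoop-aws >= 2.8.0:

    <!-- core-site.xml -->
    <property>
        <name>fs.s3a.endpoint</name>
        <value>http://my-s3-endpoint:9000</value>
    </property>
    <property>
        <name>fs.s3a.path.style.access</name>
        <value>true</value>
    </property>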

flink-1.2.0 java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/filter/Filter

2017-04-01 Thread rimin515
/CDH/lib/hbase/lib/guava-12.0.1.jar /home/.../text-assembly-0.1.0.jar hdfs:///user/hadoop/wenhao/xj/wenda_search_aggregate_page.txt and get an error: Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.filter.Filter. But with flink-1.1.1 it runs successfully. Can someone tell me

Re: Hadoop 2.7.3

2017-02-10 Thread Dean Wampler
I don't have it any more, unfortunately. To be clear, I don't think it was Flink related, but a collision between a Hadoop security library calling into a Google Guava library, where a method was missing on CacheBuilder in the latter. Also, to add to the irritation, it only happened in my OSX
