Re: [Help] Flink Hadoop dependency issue

2020-07-16 by Yang Wang
You can check inside the Pod that the /data directory is actually mounted. You should also run ps
inside the Pod to see what classpath the launched JVM process got, and whether it includes the Hadoop jars.


Of course, Roc Marshal's suggestion of adding flink-shaded-hadoop and putting it under $FLINK_HOME/lib also solves the problem.

Best,
Yang
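Yang Wang's two checks can be scripted roughly as below. This is only a sketch: "jobmanager" is the compose service name used later in this thread, and the helper function name is ours, not a Flink or Hadoop tool.

```shell
# Sketch of the checks suggested above; adjust names to your deployment.
#
#   docker exec jobmanager ls /data/hadoop-2.9.2          # is /data really mounted?
#   docker exec jobmanager sh -c 'ps -ef | grep [j]ava'   # what classpath did the JVM get?
#
# Tiny helper: count how many entries of a ':'-separated classpath
# string mention hadoop at all (0 means the jars never reached the JVM).
hadoop_entries() {
  printf '%s' "$1" | tr ':' '\n' | grep -ci hadoop
}
```

Feed `hadoop_entries` the `-classpath` value copied from the ps output; a count of 0 confirms the Hadoop jars were not picked up.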

Roc Marshal wrote on Wed, Jul 15, 2020 at 5:09 PM:

> Hi Z-Z,
>
> You can try downloading the matching uber jar from
> https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/
> putting the downloaded jar into ${FLINK_HOME}/lib of the Flink image, and then starting the orchestrated containers.
> Best,
> Roc Marshal.
>
> On 2020-07-15 10:47:39, "Z-Z" wrote:
> >I'm using Flink 1.11.0, set up with docker-compose; the docker-compose file is as follows:
> >version: "2.1"
> >services:
> > jobmanager:
> >  image: flink:1.11.0-scala_2.12
> >  expose:
> >   - "6123"
> >  ports:
> >   - "8081:8081"
> >  command: jobmanager
> >  environment:
> >   - JOB_MANAGER_RPC_ADDRESS=jobmanager
> >   -
> HADOOP_CLASSPATH=/data/hadoop-2.9.2/etc/hadoop:/data/hadoop-2.9.2/share/hadoop/common/lib/*:/data/hadoop-2.9.2/share/hadoop/common/*:/data/hadoop-2.9.2/share/hadoop/hdfs:/data/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/data/hadoop-2.9.2/share/hadoop/hdfs/*:/data/hadoop-2.9.2/share/hadoop/yarn:/data/hadoop-2.9.2/share/hadoop/yarn/lib/*:/data/hadoop-2.9.2/share/hadoop/yarn/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
> >  volumes:
> >   - ./jobmanager/conf:/opt/flink/conf
> >   - ./data:/data
> >
> >
> > taskmanager:
> >  image: flink:1.11.0-scala_2.12
> >  expose:
> >   - "6121"
> >   - "6122"
> >  depends_on:
> >   - jobmanager
> >  command: taskmanager
> >  links:
> >   - "jobmanager:jobmanager"
> >  environment:
> >   - JOB_MANAGER_RPC_ADDRESS=jobmanager
> >  volumes:
> >   - ./taskmanager/conf:/opt/flink/conf
> >networks:
> > default:
> >  external:
> >   name: flink-network
> >
> >
> >
> >hadoop-2.9.2 is already under the data directory, and HADOOP_CLASSPATH has been added to the environment of both jobmanager and taskmanager, but whether I submit via the CLI or the web UI, the jobmanager still reports "Could
> not find a file system implementation for scheme 'hdfs'". Does anyone know what's going on?
>


Re: [Help] Flink Hadoop dependency issue

2020-07-15 by Roc Marshal

Hi Z-Z,

You can try downloading the matching uber jar from
https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/
putting the downloaded jar into ${FLINK_HOME}/lib of the Flink image, and then starting the orchestrated containers.
Best,
Roc Marshal.
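That fetch-and-stage step can be sketched as below. The version string is the one mentioned later in this thread, not a recommendation; check the repo1 directory listing for the build matching your Hadoop and Flink versions, and the `stage_jar` helper is our own naming.

```shell
# Sketch: stage a flink-shaded-hadoop uber jar into the Flink lib/ directory.
# 2.7.5-10.0 is the version mentioned elsewhere in this thread; pick yours
# from the repo1 listing above.
VERSION="2.7.5-10.0"
JAR="flink-shaded-hadoop-2-uber-${VERSION}.jar"
URL="https://repo1.maven.org/maven2/org/apache/flink/flink-shaded-hadoop-2-uber/${VERSION}/${JAR}"

stage_jar() {                      # run this while building your Flink image
  curl -fLO "$URL" && mv "$JAR" "${FLINK_HOME:?}/lib/"
}
```

In a Dockerfile the same effect is usually achieved with a `RUN curl …` layer on top of the base flink image.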

On 2020-07-15 10:47:39, "Z-Z" wrote:
>I'm using Flink 1.11.0, set up with docker-compose; the docker-compose file is as follows:
>version: "2.1"
>services:
> jobmanager:
>  image: flink:1.11.0-scala_2.12
>  expose:
>   - "6123"
>  ports:
>   - "8081:8081"
>  command: jobmanager
>  environment:
>   - JOB_MANAGER_RPC_ADDRESS=jobmanager
>   - 
>HADOOP_CLASSPATH=/data/hadoop-2.9.2/etc/hadoop:/data/hadoop-2.9.2/share/hadoop/common/lib/*:/data/hadoop-2.9.2/share/hadoop/common/*:/data/hadoop-2.9.2/share/hadoop/hdfs:/data/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/data/hadoop-2.9.2/share/hadoop/hdfs/*:/data/hadoop-2.9.2/share/hadoop/yarn:/data/hadoop-2.9.2/share/hadoop/yarn/lib/*:/data/hadoop-2.9.2/share/hadoop/yarn/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
>  volumes:
>   - ./jobmanager/conf:/opt/flink/conf
>   - ./data:/data
>
>
> taskmanager:
>  image: flink:1.11.0-scala_2.12
>  expose:
>   - "6121"
>   - "6122"
>  depends_on:
>   - jobmanager
>  command: taskmanager
>  links:
>   - "jobmanager:jobmanager"
>  environment:
>   - JOB_MANAGER_RPC_ADDRESS=jobmanager
>  volumes:
>   - ./taskmanager/conf:/opt/flink/conf
>networks:
> default:
>  external:
>   name: flink-network
>
>
>
>hadoop-2.9.2 is already under the data directory, and HADOOP_CLASSPATH has been added to the environment of both jobmanager and taskmanager, but whether I submit via the CLI or the web UI, the jobmanager still reports "Could
> not find a file system implementation for scheme 'hdfs'". Does anyone know what's going on?


[Help] Flink Hadoop dependency issue

2020-07-14 by Z-Z
I'm using Flink 1.11.0, set up with docker-compose; the docker-compose file is as follows:
version: "2.1"
services:
 jobmanager:
  image: flink:1.11.0-scala_2.12
  expose:
   - "6123"
  ports:
   - "8081:8081"
  command: jobmanager
  environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
   - 
HADOOP_CLASSPATH=/data/hadoop-2.9.2/etc/hadoop:/data/hadoop-2.9.2/share/hadoop/common/lib/*:/data/hadoop-2.9.2/share/hadoop/common/*:/data/hadoop-2.9.2/share/hadoop/hdfs:/data/hadoop-2.9.2/share/hadoop/hdfs/lib/*:/data/hadoop-2.9.2/share/hadoop/hdfs/*:/data/hadoop-2.9.2/share/hadoop/yarn:/data/hadoop-2.9.2/share/hadoop/yarn/lib/*:/data/hadoop-2.9.2/share/hadoop/yarn/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/lib/*:/data/hadoop-2.9.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
  volumes:
   - ./jobmanager/conf:/opt/flink/conf
   - ./data:/data


 taskmanager:
  image: flink:1.11.0-scala_2.12
  expose:
   - "6121"
   - "6122"
  depends_on:
   - jobmanager
  command: taskmanager
  links:
   - "jobmanager:jobmanager"
  environment:
   - JOB_MANAGER_RPC_ADDRESS=jobmanager
  volumes:
   - ./taskmanager/conf:/opt/flink/conf
networks:
 default:
  external:
   name: flink-network



hadoop-2.9.2 is already under the data directory, and HADOOP_CLASSPATH has been added to the environment of both jobmanager and taskmanager, but whether I submit via the CLI or the web UI, the jobmanager still reports "Could
 not find a file system implementation for scheme 'hdfs'". Does anyone know what's going on?
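One way to avoid hand-maintaining that long HADOOP_CLASSPATH entry is to derive it from the install prefix. A sketch, mirroring the entry order of the compose file above; the `hadoop_cp` helper name is ours, and on a host where the Hadoop scripts themselves run, `export HADOOP_CLASSPATH=$(hadoop classpath)` does the same job.

```shell
# Build the classpath from the Hadoop prefix instead of hard-coding every entry.
hadoop_cp() {
  h="$1"
  printf '%s' "$h/etc/hadoop:$h/share/hadoop/common/lib/*:$h/share/hadoop/common/*:$h/share/hadoop/hdfs:$h/share/hadoop/hdfs/lib/*:$h/share/hadoop/hdfs/*:$h/share/hadoop/yarn/lib/*:$h/share/hadoop/yarn/*:$h/share/hadoop/mapreduce/lib/*:$h/share/hadoop/mapreduce/*"
}
export HADOOP_CLASSPATH="$(hadoop_cp /data/hadoop-2.9.2)"
```

Note that HADOOP_CLASSPATH only helps if the Flink entrypoint actually evaluates it; the diagnostic suggested earlier in the thread (ps on the JVM process) tells you whether it did.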

Re: Flink Hadoop dependency

2020-07-08 by Xintong Song
Which "jobmanager lib folder" do you mean? How is Flink deployed, and where does the CLI run?

Thank you~

Xintong Song



On Wed, Jul 8, 2020 at 10:59 AM Z-Z  wrote:

> Hi, everyone. I've hit a problem: in Flink
> 1.10.0, I've already added flink-shaded-hadoop-2-uber-2.7.5-10.0.jar to the jobmanager's lib folder. Jobs uploaded through the web UI run fine, but after submitting through the CLI the job reports "Could
> not find a file system implementation for scheme 'hdfs'. The scheme is not
> directly supported by Flink and no Hadoop file system to support this
> scheme could be loaded." Does anyone know what's going on?


Flink Hadoop dependency

2020-07-07 by Z-Z
Hi, everyone. In Flink
1.10.0 I've already added flink-shaded-hadoop-2-uber-2.7.5-10.0.jar to the jobmanager's lib folder. Jobs uploaded through the web UI run fine, but after submitting through the CLI the job reports "Could
 not find a file system implementation for scheme 'hdfs'. The scheme is not
directly supported by Flink and no Hadoop file system to support this scheme
could be loaded." Does anyone know what's going on?

Re: Flink accessing a Hadoop cluster

2020-05-29 by 13162790...@163.com
The problem is solved now, thanks.


---------------- Original message ----------------
From: "" <13162790...@163.com>
Date: Fri, May 29, 2020, 3:20 PM
To: "user-zh"

Re: Flink accessing a Hadoop cluster

2020-05-29 by 13162790...@163.com
Thanks very much. I put hdfs-site.xml under the resource directory.


---------------- Original message ----------------
From: "wangweigu...@stevegame.cn"

Re: Flink accessing a Hadoop cluster

2020-05-29 by wangweigu...@stevegame.cn
   The default HDFS port on CDH is 8020. Writing
hdfs://TestHACluster/user/flink/test is fine; there is no need for
hdfs://TestHACluster:8020/user/flink/test.
   For Flink to recognize the TestHACluster namespace (HDFS HA), the HA configuration files (hdfs-site.xml, plus hive-site.xml if Hive is involved) have to be available to it.


---------------- Original message ----------------
From:
Date: 2020-05-29 15:06
To: user-zh
Subject: Flink accessing a Hadoop cluster
Hi, everyone.
My Hadoop cluster uses the HA nameservice TestHACluster. When writing through the API to the path
hdfs://TestHACluster/user/flink/test
should the address be TestHACluster:8020 instead?
In Hive, for example, it is TestHACluster:8020.
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStream
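The port-free HA path works once the client can see the HA configuration. A sketch of the environment side, under explicit assumptions: the config-directory path below is hypothetical, and your hdfs-site.xml must declare TestHACluster in `dfs.nameservices`; the `hdfs_uri` helper is ours, just to show the path shape.

```shell
# Sketch: let Flink resolve the HA nameservice instead of a host:port.
# HADOOP_CONF_DIR must contain the hdfs-site.xml / core-site.xml that
# define the TestHACluster nameservice; /etc/hadoop/conf is an assumption.
export HADOOP_CONF_DIR=/etc/hadoop/conf

# HA paths carry only the nameservice, never a port:
hdfs_uri() {
  printf 'hdfs://%s%s' "$1" "$2"
}
```

For example, `hdfs_uri TestHACluster /user/flink/test` yields `hdfs://TestHACluster/user/flink/test`, matching the path used in the question above.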

Flink accessing a Hadoop cluster

2020-05-29 by 13162790...@163.com
Hi, everyone.
My Hadoop cluster uses the HA nameservice TestHACluster. When writing through the API to the path
hdfs://TestHACluster/user/flink/test
should the address be TestHACluster:8020 instead?
In Hive, for example, it is TestHACluster:8020.
StreamExecutionEnvironment env =
StreamExecutionEnvironment.getExecutionEnvironment();
DataStream