Hi Enzo Wang,
The image won't load and the GitHub link isn't accessible from here either. Could you try whether Hive itself can read and write the table normally?
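
A quick way to tell would be something like this (a rough sketch; the hive-server container name and the sample values are my guesses, adjust to your compose setup):

# read and write the table from Hive itself
❯ docker-compose exec hive-server hive -e "SELECT * FROM pokes LIMIT 5;"
❯ docker-compose exec hive-server hive -e "INSERT INTO pokes VALUES (100, 'hello');"
# and confirm the datanode is registered and not excluded
❯ docker-compose exec namenode hdfs dfsadmin -report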













--

Best,
wldd




On 2020-05-26 17:01:32, "Enzo wang" <sre.enzow...@gmail.com> wrote:

Hi Wldd,


Thanks for the reply.


1. The datanode is available:


❯ docker-compose exec namenode hadoop fs -ls /tmp
Found 1 items
drwx-wx-wx   - root supergroup          0 2020-05-26 05:40 /tmp/hive


It also shows up in the namenode web UI:




2. After running set execution.type=batch;, the insert still fails, with the error below:
Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File 
/user/hive/warehouse/pokes/.staging_1590483224500/cp-0/task-0/cp-0-task-0-file-0
 could only be replicated to 0 nodes instead of minReplication (=1). There are 
1 datanode(s) running and 1 node(s) are excluded in this operation.


Full error here:
https://gist.github.com/r0c/f95ec650fec0a16055787ac0d63f4673



On Tue, 26 May 2020 at 16:52, wldd <wldd1...@163.com> wrote:

Problem 1:

org.apache.hadoop.hdfs.BlockMissingException: you can use the hadoop fs commands to check whether that datanode is reachable.
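
For example (the file path is taken from your error message; the namenode container name is a guess and the fsck flags are just the standard HDFS ones, adjust to your setup):

# try to actually read the file whose block is reported missing
❯ docker-compose exec namenode hadoop fs -cat /user/hive/warehouse/pokes/kv1.txt | head
# see which datanode is supposed to hold each block
❯ docker-compose exec namenode hdfs fsck /user/hive/warehouse/pokes/kv1.txt -files -blocks -locations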


Problem 2:
Writing to Hive needs batch mode: set execution.type=batch;
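
i.e. in the SQL CLI session, before rerunning the insert from your mail:

Flink SQL> set execution.type=batch;
Flink SQL> insert into pokes select 12,'tom';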







On 2020-05-26 16:42:12, "Enzo wang" <sre.enzow...@gmail.com> wrote:

Hi Flink group,


I was going through the Flink/Hive integration today and ran into a few problems; could someone help take a look?
Reference: https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html


Version and table schema details are here: https://gist.github.com/r0c/e244622d66447dfc85a512e75fc2159b


Problem 1: Flink SQL fails to read the Hive table pokes


Flink SQL> select * from pokes;
2020-05-26 16:12:11,439 INFO  org.apache.hadoop.mapred.FileInputFormat          
            - Total input paths to process : 4
[ERROR] Could not execute SQL statement. Reason:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: 
BP-138389591-172.20.0.4-1590471581703:blk_1073741825_1001 
file=/user/hive/warehouse/pokes/kv1.txt







Problem 2: Flink SQL fails to write to the Hive table pokes


Flink SQL> insert into pokes select 12,'tom';
[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.TableException: Stream Tables can only be emitted by 
AppendStreamTableSink, RetractStreamTableSink, or UpsertStreamTableSink.







Cheers,
Enzo
