Hi,

As far as I remember, hbase.fs.tmp.dir is an HBase server-side configuration.
So you need to restart the HBase service for that configuration property value
to take effect.
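
For reference, a minimal hbase-site.xml entry would look roughly like this
(just a sketch using the /wy/tmp/hbase path mentioned below; adjust the path
and description to your setup):

<property>
  <name>hbase.fs.tmp.dir</name>
  <value>/wy/tmp/hbase</value>
  <description>Staging directory in the default file system (HDFS)
      for temporary data written while generating HFiles.</description>
</property>

As far as I know, the value is read from whichever hbase-site.xml is on the
classpath of the process that calls HBaseConfiguration.create(), so the same
file should also be visible to the Kylin job.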

Regards,
Ashish

On Wed, Nov 21, 2018 at 7:20 PM 刘成军 <[email protected]> wrote:

> Hi all,
>     I have tried the following methods:
>     1. Tried setting "dfs.namenode.fs-limits.max-directory-items" in
> hdfs-site.xml.
>         I restarted the Kylin service (I cannot restart the HDFS service
> because it is in use).
>         Result: failed with the same error:
>                 The directory item limit of /tmp is exceeded:limit=1028576
> items=1028576
>
>     2. Tried adding the property hbase.fs.tmp.dir (=/wy/tmp/hbase) to
> hbase-site.xml.
>         I restarted the Kylin service.
>         Result: failed with the same error:
>                 The directory item limit of /tmp is exceeded:limit=1028576
> items=1028576
>
>     Does anyone have other suggestions?
>
>      PS:
>         Hadoop env:
>             CDH 5.13 with Kerberos enabled (many nodes)
>         Kylin env:
>             Kylin 2.4.0 with Hadoop client installed (not managed by the CDH
> management service)
>
>
> ------------------------------------------------------------------
> From: 刘成军 <[email protected]>
> Sent: Tuesday, November 20, 2018, 23:25
> To: user <[email protected]>; JiaTao Tao <[email protected]>
> Subject: Re: Help for job build: directory item limit exceeded exception
>
> JiaTao:
>     Thanks for your reply, I will try it later.
>
> But I have checked the source code; the relevant code is:
>
>     Configuration conf = 
> HBaseConfiguration.create(HadoopUtil.getCurrentConfiguration());
>     ...
>     // Falls back to /tmp when hbase.fs.tmp.dir is not set in the loaded config
>     if (StringUtils.isBlank(conf.get("hbase.fs.tmp.dir"))) {
>             conf.set("hbase.fs.tmp.dir", "/tmp");
>     }
>
>     My question is: I have set the hbase.fs.tmp.dir property in
> hbase-site.xml (and restarted Kylin), but it still writes data to the /tmp
> directory.
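>
>     One way I could check this (just a sketch; the CheckHBaseTmpDir class
> below is made up for the test, it is not Kylin code) is to run a small
> program on the Kylin node with the same classpath and print which value,
> and from which file, HBaseConfiguration actually picks up:
>
>     import org.apache.hadoop.conf.Configuration;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>
>     public class CheckHBaseTmpDir {
>         public static void main(String[] args) {
>             // Loads hbase-default.xml and hbase-site.xml from the classpath
>             Configuration conf = HBaseConfiguration.create();
>             String value = conf.get("hbase.fs.tmp.dir");
>             System.out.println("hbase.fs.tmp.dir = " + value);
>             // Reports which resource(s) the value came from, if it was set
>             String[] sources = conf.getPropertySources("hbase.fs.tmp.dir");
>             System.out.println("loaded from: "
>                     + (sources == null ? "not set" : String.join(", ", sources)));
>         }
>     }
>
>     If the source is not the hbase-site.xml I edited, then that file is
> probably not on the classpath the Kylin job uses.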
>
>     Does anyone have other suggestions?
>
> ------------------------------------------------------------------
> From: JiaTao Tao <[email protected]>
> Sent: Tuesday, November 20, 2018, 22:59
> To: user <[email protected]>; 刘成军 <[email protected]>
> Subject: Re: Help for job build: directory item limit exceeded exception
>
> Hi,
>
> It seems that there are too many files in "/tmp"; try modifying the config
> below in "hdfs-site.xml".
>
> <property>
>   <name>dfs.namenode.fs-limits.max-directory-items</name>
>   <value>1048576</value>
>   <description>Defines the maximum number of items that a directory may
>       contain. Cannot set the property to a value less than 1 or more than
>       6400000.</description>
> </property>
>
>
> And here's a link for you:
> https://tw.saowen.com/a/fa6aea71141c6241f496093d9b0feb0c87bf4c30cf40b4ff6fdc065a8228231a
> It is generally recommended that users do not tune these values except in
> very unusual circumstances.
>
> 刘成军 <[email protected]> wrote on Tuesday, November 20, 2018 at 11:01 AM:
> Hi,
>     When building a cube on my CDH 5.13 cluster (with Kerberos enabled), the
> job fails at step #10, "Convert Cuboid Data to HFile", with the following
> exception:
>
>
>  I also changed the HBase config (hbase.fs.tmp.dir=/usr/tmp/hbase) in my
> hbase-site.xml, but I get the same exception.
>  How can I deal with it?
>
> PS:
>    I do not have permission to delete the data in /tmp.
>
> Best Regards
>
> -----------------------------
>
> 刘成军 (Gavin)
>
> ————————————————
>
> Mobile: 13913036255
>
>
>
>
>
>
> --
>
>
> Regards!
>
> Aron Tao
>
>
>
