Thanks for replying. Yes, it is a single-node HBase cluster, and I am not specifying any storage policy. Looking at the HStore code <https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java#L274>, it appears that even when no storage policy is specified, it defaults to HOT.
Can you explain this a bit more? How can I get around this error, or should I simply ignore it on a single-node HBase cluster?

On Tue, Jul 21, 2020 at 3:03 PM zheng wang <18031...@qq.com> wrote:

> LocalFileSystem? setStoragePolicy can only be used on
> distributed HDFS.
>
> ------------------ Original Message ------------------
> From: "user" <subharaj.ma...@gmail.com>
> Sent: Tuesday, July 21, 2020, 5:58 PM
> To: "Hbase-User" <user@hbase.apache.org>
> Subject: HBase 2.1.0 - NoSuchMethodException
> org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy
>
> Hi,
>
> I am using HBase 2.1.0 with Hadoop 3.0.0. In the HBase master logs I am
> seeing a warning like the one below:
>
> 2020-07-20 06:02:24,859 WARN [StoreOpener-1588230740-1]
> util.CommonFSUtils: FileSystem doesn't support setStoragePolicy;
> HDFS-6584, HDFS-9345 not available. This is normal and expected on
> earlier Hadoop versions.
> java.lang.NoSuchMethodException:
> org.apache.hadoop.fs.LocalFileSystem.setStoragePolicy(org.apache.hadoop.fs.Path, java.lang.String)
>     at java.lang.Class.getDeclaredMethod(Class.java:2130)
>     at org.apache.hadoop.hbase.util.CommonFSUtils.invokeSetStoragePolicy(CommonFSUtils.java:577)
>     at org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:558)
>     at org.apache.hadoop.hbase.util.CommonFSUtils.setStoragePolicy(CommonFSUtils.java:526)
>     at org.apache.hadoop.hbase.regionserver.HRegionFileSystem.setStoragePolicy(HRegionFileSystem.java:194)
>     at org.apache.hadoop.hbase.regionserver.HStore.<init>(HStore.java:264)
>     at org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:5731)
>     at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1059)
>     at org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:1056)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
>
> As per my understanding, this error should not occur with Hadoop
> 3.0.0. Can someone let me know if my understanding is correct, or what
> could be going wrong here?
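For anyone following along, the stack trace (getDeclaredMethod followed by invokeSetStoragePolicy) shows HBase looking up setStoragePolicy reflectively on whatever FileSystem class is in use, and treating a missing method as "feature unavailable" rather than a fatal error. Below is a minimal, self-contained sketch of that pattern; it is NOT the actual CommonFSUtils source, and the Fake* classes are stand-ins I invented so it compiles without Hadoop on the classpath:

```java
import java.lang.reflect.Method;

public class StoragePolicyProbe {

    // Sketch of the reflection pattern the stack trace implies: look up
    // setStoragePolicy(path, String) on the concrete FileSystem class and
    // treat NoSuchMethodException as "not supported" instead of failing.
    static boolean supportsSetStoragePolicy(Class<?> fsClass, Class<?> pathClass) {
        try {
            Method m = fsClass.getDeclaredMethod("setStoragePolicy", pathClass, String.class);
            return m != null;
        } catch (NoSuchMethodException e) {
            // This is the exception HBase logs as a WARN when the underlying
            // FileSystem (e.g. LocalFileSystem) does not declare the method.
            return false;
        }
    }

    // Hypothetical stand-ins for the Hadoop classes, for illustration only.
    static class FakePath {}
    static class FakeLocalFileSystem {}              // declares no setStoragePolicy
    static class FakeDistributedFileSystem {
        public void setStoragePolicy(FakePath p, String policy) {}
    }

    public static void main(String[] args) {
        // A LocalFileSystem-like class fails the probe; an HDFS-like one passes.
        System.out.println(supportsSetStoragePolicy(FakeLocalFileSystem.class, FakePath.class));
        System.out.println(supportsSetStoragePolicy(FakeDistributedFileSystem.class, FakePath.class));
    }
}
```

If the real code follows this shape, the WARN in standalone mode would just mean the probe failed on LocalFileSystem and the HOT default was silently skipped, which matches zheng wang's point that the API only applies to distributed HDFS.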