Appending:

I hit this exception while running the MR job, where /data/program/hbase-1.1.3 is HBASE_HOME.

2016-04-21 20:14:29,958 ERROR [main] index.IndexTool: An exception occured while performing the indexing job : java.io.FileNotFoundException: File does not exist: hdfs://ip:port/data/program/hbase-1.1.3/lib/metrics-core-2.2.0.jar
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1072)
        at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:288)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.getFileStatus(ClientDistributedCacheManager.java:224)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestamps(ClientDistributedCacheManager.java:93)
        at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:57)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
        at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
        at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
        at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
        at org.apache.phoenix.mapreduce.index.IndexTool.configureRunnableJobUsingBulkLoad(IndexTool.java:287)
        at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:250)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
        at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:378)
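The stack trace shows the MR job client resolving a local HBase classpath entry against HDFS: the jar exists under the local HBASE_HOME/lib but not at the same path in HDFS. One common workaround (a sketch only; the paths and the assumption that your cluster resolves the classpath this way should be verified against your setup) is to copy the missing jar to the matching HDFS path before rerunning the job:

```shell
# Assumed local HBASE_HOME, matching the path in the error above.
HBASE_HOME=/data/program/hbase-1.1.3
JAR=metrics-core-2.2.0.jar

# Create the same directory layout in HDFS and upload the jar there,
# so the job client's file-status check on hdfs://.../lib/${JAR} succeeds.
hdfs dfs -mkdir -p ${HBASE_HOME}/lib
hdfs dfs -put -f ${HBASE_HOME}/lib/${JAR} ${HBASE_HOME}/lib/
```

Alternatively, making sure the jar is on the job's classpath via HADOOP_CLASSPATH on the submitting machine may avoid the HDFS lookup entirely, depending on how the job is configured.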




On 2016-04-21 17:21, 金砖 wrote:
Hi, I read in the documentation that an index can be created asynchronously.

After creating the index with the ASYNC keyword, kick off a MapReduce job to populate the index:

${HBASE_HOME}/bin/hbase org.apache.phoenix.mapreduce.index.IndexTool
   --schema MY_SCHEMA --data-table MY_TABLE --index-table ASYNC_IDX
   --output-path ASYNC_IDX_HFILES
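For context, the preceding step (creating the index asynchronously) might look like the following. This is a sketch with hypothetical column and ZooKeeper host names; only the table and index names come from the IndexTool invocation above:

```shell
# Hypothetical: SOME_COL and zk-host are placeholders for your schema and quorum.
# Piping the DDL into sqlline.py; the ASYNC keyword defers population to IndexTool.
echo "CREATE INDEX ASYNC_IDX ON MY_SCHEMA.MY_TABLE (SOME_COL) ASYNC;" | \
    ${PHOENIX_HOME}/bin/sqlline.py zk-host
```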

My question:
Can the value of --output-path (here ASYNC_IDX_HFILES) be any HDFS path?
Or must it match (or must it not match) the HBase table's path in HDFS?
