Hi, I'm using:

    STORE filtered INTO 'hbase://my_table' USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('d:sk');

Everything works fine and the job itself succeeds:

    Success!

    Job Stats (time in seconds):
    JobId  Maps  Reduces  MaxMapTime  MinMapTIme  AvgMapTime  MedianMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  MedianReducetime  Alias  Feature  Outputs
    job_201406182224_92888  1  0  3  3  3  3  0  0  0  0

    Input(s):
    Successfully read 266 records (26557 bytes) from: "hdfs://nameservice1/masterdata/dict/my_data/archive/2014/08/10"

    Output(s):
    Successfully stored 266 records in: "hbase://my_table"

    Counters:
    Total records written : 266
    Total bytes written : 0
    Spillable Memory Manager spill count : 0
    Total bags proactively spilled: 0
    Total records proactively spilled: 0

    Job DAG:
    job_201406182224_92888

The problem is here:

    Details at logfile: /mapred/tt02/taskTracker/oozie/jobcache/job_201406182224_92887/attempt_201406182224_92887_m_000000_0/work/pig-job_201406182224_92887.log

Pig logfile dump:

    Pig Stack Trace
    ---------------
    ERROR 2043: Unexpected error during execution.

    org.apache.pig.backend.executionengine.ExecException: ERROR 2043: Unexpected error during execution.
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1277)
        at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1251)
        at org.apache.pig.PigServer.execute(PigServer.java:1241)
        at org.apache.pig.PigServer.executeBatch(PigServer.java:335)
        at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:137)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
        at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
        at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
        at org.apache.pig.Main.run(Main.java:475)
        at org.apache.pig.PigRunner.run(PigRunner.java:49)
        at org.apache.oozie.action.hadoop.PigMain.runPigJob(PigMain.java:283)
        at org.apache.oozie.action.hadoop.PigMain.run(PigMain.java:223)
        at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:37)
        at org.apache.oozie.action.hadoop.PigMain.main(PigMain.java:76)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:495)
        at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
        at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1438)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)
    Caused by: java.io.IOException: No FileSystem for scheme: hbase
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2298)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2305)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2344)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2326)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:353)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.createSuccessFile(MapReduceLauncher.java:649)
        at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:433)
        at org.apache.pig.PigServer.launchPlan(PigServer.java:1266)
        ... 26 more
    ================================================================================

    Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.PigMain], exit code [2]

This causes the Oozie workflow to fail. Is there any workaround for this? Right now I get "false positive" alerts about failures: the data is actually loaded, but formally the workflow fails because of this strange behavior.
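From the stack trace, the error seems to occur only after the job has already succeeded, in MapReduceLauncher.createSuccessFile, where Pig apparently tries to resolve the output location "hbase://my_table" as a Hadoop FileSystem so it can write a _SUCCESS marker file, and HDFS has no handler for the hbase scheme. Assuming that is indeed the cause, one workaround I am considering (not yet verified on my cluster) is to disable the success-marker file in the Pig script:

    -- Tell Pig/Hadoop not to write a _SUCCESS marker to the output location;
    -- with the marker disabled, the createSuccessFile step should be skipped,
    -- so the bogus "No FileSystem for scheme: hbase" failure should not occur.
    SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;

If anyone can confirm this is safe when the only output is HBase (i.e. nothing downstream depends on a _SUCCESS file), that would be helpful.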