In your case, it seems like Oozie treated "hbase" as a file system and is trying to create a success file. The hint for that behavior is:

    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.createSuccessFile(MapReduceLauncher.java:688)

Are you trying to create a "success" file via your workflow? If yes, can you try removing that step?
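[Not part of the original reply: a minimal, untested Pig sketch of that idea. It assumes the launcher only writes the _SUCCESS marker when Hadoop's "mapreduce.fileoutputcommitter.marksuccessfuljobs" property is true, borrows the jar paths from the steps quoted further down, and uses a hypothetical input path and the Phoenix-Pig STORE syntax from the Phoenix docs; verify all of this against your Pig/Phoenix versions. If the marker instead comes from an explicit step in the Oozie workflow, dropping that step as suggested above is the simpler fix.]

-- jar paths as registered in the steps quoted below
register '/opt/phoenix-4.2.2-bin/phoenix-pig-4.2.2.jar';
register '/opt/phoenix-4.2.2-bin/phoenix-4.2.2-client.jar';

-- assumed workaround: turn the _SUCCESS marker off for this script so that
-- MapReduceLauncher.createSuccessFile() never tries to resolve the hbase://
-- output location as a Hadoop FileSystem
SET mapreduce.fileoutputcommitter.marksuccessfuljobs false;

-- hypothetical input path and schema; the STORE statement follows the
-- Phoenix-Pig documentation for PhoenixHBaseStorage
raw = LOAD '/data/hires' USING PigStorage(',') AS (clientid:int, empid:int, name:chararray);
STORE raw INTO 'hbase://HIRES_SALTED' USING org.apache.phoenix.pig.PhoenixHBaseStorage('localhost', '-batchSize 1000');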
On Fri, Aug 14, 2015 at 8:30 AM, Pariksheet Barapatre <[email protected]> wrote:

> Thanks for the info, Anil.
>
> Yes, the script runs fine without Oozie. The Pig job runs with successful status,
> but after that the Oozie launcher complains with the error I mentioned above. What
> kind of hack can we apply? This is the first time we are using Pig-Phoenix
> integration.
>
> Thanks
> Pari
>
> On 14 August 2015 at 20:41, anil gupta <[email protected]> wrote:
>
>> Hi Pari,
>>
>> AFAIK, Oozie does not support HBase out of the box. Does the script run
>> fine without Oozie?
>> You will need to do some ugly hacks at your end to get this going. We
>> don't use Pig with HBase, so I won't be able to tell you the exact solution.
>>
>> PS: At my workplace, we have had many challenges integrating HBase
>> and Avro with Oozie 4.2.
>>
>> Thanks,
>> Anil Gupta
>>
>> On Fri, Aug 14, 2015 at 7:24 AM, Pariksheet Barapatre <[email protected]> wrote:
>>
>>> Hi Ravi/All,
>>>
>>> When I use PhoenixPigStorage to load HBase and try to run the job through
>>> Oozie, I am getting the below error -
>>>
>>> Pig logfile dump:
>>>
>>> Pig Stack Trace
>>> ---------------
>>> ERROR 2043: Unexpected error during execution.
>>>
>>> org.apache.pig.backend.executionengine.ExecException: ERROR 2043: Unexpected error during execution.
>>>     at org.apache.pig.PigServer.launchPlan(PigServer.java:1333)
>>>     at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1307)
>>>     at org.apache.pig.PigServer.execute(PigServer.java:1297)
>>>     at org.apache.pig.PigServer.executeBatch(PigServer.java:375)
>>>     at org.apache.pig.PigServer.executeBatch(PigServer.java:353)
>>>     at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:202)
>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
>>>     at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
>>>     at org.apache.pig.Main.run(Main.java:478)
>>>     at org.apache.pig.PigRunner.run(PigRunner.java:49)
>>>     at org.apache.oozie.action.hadoop.PigMain.runPigJob(PigMain.java:286)
>>>     at org.apache.oozie.action.hadoop.PigMain.run(PigMain.java:226)
>>>     at org.apache.oozie.action.hadoop.LauncherMain.run(LauncherMain.java:39)
>>>     at org.apache.oozie.action.hadoop.PigMain.main(PigMain.java:74)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>>     at org.apache.oozie.action.hadoop.LauncherMapper.map(LauncherMapper.java:227)
>>>     at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>>>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
>>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>>>     at java.security.AccessController.doPrivileged(Native Method)
>>>     at javax.security.auth.Subject.doAs(Subject.java:415)
>>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
>>>     at org.apache.hadoop.mapred.Child.main(Child.java:262)
>>> Caused by: java.io.IOException: No FileSystem for scheme: hbase
>>>     at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2584)
>>>     at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
>>>     at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
>>>     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
>>>     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
>>>     at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
>>>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.createSuccessFile(MapReduceLauncher.java:688)
>>>     at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:461)
>>>     at org.apache.pig.PigServer.launchPlan(PigServer.java:1322)
>>>     ... 27 more
>>>
>>> The table is getting loaded, but at the end the Oozie job fails with the above
>>> error. Any workaround/known issues?
>>>
>>> Thanks for your help in advance.
>>>
>>> Regards
>>> Pari
>>>
>>> On 12 August 2015 at 17:59, Pariksheet Barapatre <[email protected]> wrote:
>>>
>>>> Many Thanks Ravi.
>>>>
>>>> Your solution worked. Let me get a JIRA.
>>>>
>>>> Thanks
>>>> Pari
>>>>
>>>> On 10 August 2015 at 20:08, Ravi Kiran <[email protected]> wrote:
>>>>
>>>>> Hi Pari,
>>>>>
>>>>> I wrote a quick test and there indeed seems to be an issue when
>>>>> SALT_BUCKETS are mentioned on the table. Can you please raise a JIRA
>>>>> ticket?
>>>>>
>>>>> In the meanwhile, can you try the following to get over the issue:
>>>>>
>>>>> raw = LOAD 'hbase://query/SELECT CLIENTID,EMPID,NAME FROM HIRES' USING org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
>>>>> grpd = GROUP raw BY CLIENTID;
>>>>> cnt = FOREACH grpd GENERATE group AS CLIENT,COUNT(raw);
>>>>> DUMP cnt;
>>>>>
>>>>> Let me know how it goes.
>>>>>
>>>>> Regards
>>>>> Ravi
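[Not part of the original reply: the same query-based workaround applied to the HIRES_SALTED table and jar paths from the steps quoted below, as a minimal untested sketch; the ZooKeeper quorum ('localhost') is assumed from the thread.]

register '/opt/phoenix-4.2.2-bin/phoenix-pig-4.2.2.jar';
register '/opt/phoenix-4.2.2-bin/phoenix-4.2.2-client.jar';

-- assumed: loading via hbase://query/... avoids the table-metadata path that
-- threw the NullPointerException for the salted table above
raw = LOAD 'hbase://query/SELECT CLIENTID, EMPID, NAME FROM HIRES_SALTED' USING org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
grpd = GROUP raw BY CLIENTID;
cnt = FOREACH grpd GENERATE group AS CLIENT, COUNT(raw);
DUMP cnt;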
>>>>>
>>>>> On Mon, Aug 10, 2015 at 1:21 AM, Pariksheet Barapatre <[email protected]> wrote:
>>>>>
>>>>>> Here are the steps -
>>>>>>
>>>>>> -- Phoenix version - 4.2.2
>>>>>> -- CDH - 5.3
>>>>>>
>>>>>> -- SQLLINE
>>>>>> CREATE TABLE HIRES_SALTED( CLIENTID INTEGER NOT NULL, EMPID INTEGER NOT NULL, NAME VARCHAR CONSTRAINT pk PRIMARY KEY(CLIENTID,EMPID)) SALT_BUCKETS=2;
>>>>>>
>>>>>> UPSERT INTO HIRES_SALTED VALUES(10,100,'ABC');
>>>>>> UPSERT INTO HIRES_SALTED VALUES(11,101,'XYZ');
>>>>>>
>>>>>> -- PIG
>>>>>> register '/opt/phoenix-4.2.2-bin/phoenix-pig-4.2.2.jar';
>>>>>> register '/opt/phoenix-4.2.2-bin/phoenix-4.2.2-client.jar';
>>>>>>
>>>>>> raw = LOAD 'hbase://table/HIRES_SALTED' USING org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
>>>>>> dump raw
>>>>>>
>>>>>> Error -
>>>>>> 2015-08-10 13:43:56,036 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: null
>>>>>> Details at logfile: /../pig_1439194425512.log
>>>>>>
>>>>>> Pig Stack Trace
>>>>>> ---------------
>>>>>> ERROR 1200: null
>>>>>>
>>>>>> Failed to parse: null
>>>>>>     at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:198)
>>>>>>     at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1648)
>>>>>>     at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1621)
>>>>>>     at org.apache.pig.PigServer.registerQuery(PigServer.java:575)
>>>>>>     at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1093)
>>>>>>     at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
>>>>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
>>>>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
>>>>>>     at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
>>>>>>     at org.apache.pig.Main.run(Main.java:541)
>>>>>>     at org.apache.pig.Main.main(Main.java:156)
>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
>>>>>> Caused by: java.lang.NullPointerException
>>>>>>     at org.apache.phoenix.pig.util.PhoenixPigSchemaUtil.getResourceSchema(PhoenixPigSchemaUtil.java:67)
>>>>>>     at org.apache.phoenix.pig.PhoenixHBaseLoader.getSchema(PhoenixHBaseLoader.java:220)
>>>>>>     at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
>>>>>>     at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
>>>>>>     at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:853)
>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
>>>>>>     at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
>>>>>>     ... 15 more
>>>>>>
>>>>>> ================================================================================
>>>>>>
>>>>>> If we create the table without SALT, everything works without any issue.
>>>>>>
>>>>>> Can anyone please help?
>>>>>>
>>>>>> Many Thanks
>>>>>> Pari
>>>>>>
>>>>>> On 10 August 2015 at 13:12, Pariksheet Barapatre <[email protected]> wrote:
>>>>>>
>>>>>>> Hi Russell,
>>>>>>>
>>>>>>> Below is the error I am getting in pig.log:
>>>>>>>
>>>>>>> Failed to parse: null
>>>>>>>     at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:198)
>>>>>>>     at org.apache.pig.PigServer$Graph.validateQuery(PigServer.java:1648)
>>>>>>>     at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1621)
>>>>>>>     at org.apache.pig.PigServer.registerQuery(PigServer.java:575)
>>>>>>>     at org.apache.pig.tools.grunt.GruntParser.processPig(GruntParser.java:1093)
>>>>>>>     at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:501)
>>>>>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
>>>>>>>     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
>>>>>>>     at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:69)
>>>>>>>     at org.apache.pig.Main.run(Main.java:541)
>>>>>>>     at org.apache.pig.Main.main(Main.java:156)
>>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>>     at java.lang.reflect.Method.invoke(Method.java:606)
>>>>>>>     at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
>>>>>>> Caused by: java.lang.NullPointerException
>>>>>>>     at org.apache.phoenix.pig.util.PhoenixPigSchemaUtil.getResourceSchema(PhoenixPigSchemaUtil.java:67)
>>>>>>>     at org.apache.phoenix.pig.PhoenixHBaseLoader.getSchema(PhoenixHBaseLoader.java:220)
>>>>>>>     at org.apache.pig.newplan.logical.relational.LOLoad.getSchemaFromMetaData(LOLoad.java:175)
>>>>>>>     at org.apache.pig.newplan.logical.relational.LOLoad.<init>(LOLoad.java:89)
>>>>>>>     at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:853)
>>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3568)
>>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1625)
>>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:1102)
>>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:560)
>>>>>>>     at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:421)
>>>>>>>     at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:188)
>>>>>>>     ... 15 more
>>>>>>>
>>>>>>> I am registering the below two jars -
>>>>>>> register '/home/yumecorp/phoenix-4.2.0-bin/phoenix-pig-4.2.0.jar';
>>>>>>> register '/home/yumecorp/phoenix-4.2.0-bin/phoenix-4.2.0-client.jar';
>>>>>>>
>>>>>>> Thanks
>>>>>>> Pari
>>>>>>>
>>>>>>> On 10 August 2015 at 12:50, Russell Jurney <[email protected]> wrote:
>>>>>>>
>>>>>>>> There shouldn't be a limitation, no. I've stored to/read from
>>>>>>>> Phoenix tables from Pig on CDH 5.
>>>>>>>>
>>>>>>>> Can you paste the error?
>>>>>>>>
>>>>>>>> On Monday, August 10, 2015, Pariksheet Barapatre <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi All,
>>>>>>>>>
>>>>>>>>> I am trying to run a Pig script on a Phoenix table.
>>>>>>>>>
>>>>>>>>> I am using the same example given in the documentation.
>>>>>>>>>
>>>>>>>>> CREATE TABLE HIRES( CLIENTID INTEGER NOT NULL, EMPID INTEGER NOT NULL, NAME VARCHAR CONSTRAINT pk PRIMARY KEY(CLIENTID,EMPID));
>>>>>>>>>
>>>>>>>>> raw = LOAD 'hbase://table/HIRES' USING org.apache.phoenix.pig.PhoenixHBaseLoader('localhost');
>>>>>>>>> grpd = GROUP raw BY CLIENTID;
>>>>>>>>> cnt = FOREACH grpd GENERATE group AS CLIENT,COUNT(raw);
>>>>>>>>> DUMP cnt;
>>>>>>>>>
>>>>>>>>> The code above works without any issue, but if I create the HIRES table with SALT_BUCKETS, the script fails with a QueryParser error.
>>>>>>>>>
>>>>>>>>> Is there any limitation of the Pig integration with SALTED tables?
>>>>>>>>>
>>>>>>>>> Cheers,
>>>>>>>>> Pari
>>>>>>>>
>>>>>>>> --
>>>>>>>> Russell Jurney  twitter.com/rjurney  [email protected]  datasyndrome.com
>>>>>>>
>>>>>>> --
>>>>>>> Cheers,
>>>>>>> Pari
>>>>>>
>>>>>> --
>>>>>> Cheers,
>>>>>> Pari
>>>>
>>>> --
>>>> Cheers,
>>>> Pari
>>>
>>> --
>>> Cheers,
>>> Pari
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>
> --
> Cheers,
> Pari

--
Thanks & Regards,
Anil Gupta
