Lorand,
   Isn't the fetch optimization supposed to apply only to DUMP and not to STORE?
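   For context, my understanding is that with opt.fetch=true (the default
   since Pig 0.13) a simple DUMP pipeline is executed directly in the
   front-end instead of being launched as an MR job, which would explain why
   only the dump is affected and why disabling fetch works around the
   failure. A minimal sketch of the workaround, using the setting Lorand
   already mentioned:

   grunt> set opt.fetch false;   -- disable the fetch optimization for this session
   grunt> X = load 'junit_unparted_basic' using org.apache.hive.hcatalog.pig.HCatLoader();
   grunt> dump X;

   Passing -Dopt.fetch=false on the pig command line should have the same
   effect, if I remember correctly.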

-Rohini

On Tue, Oct 14, 2014 at 6:47 PM, lulynn_2008 <lulynn_2...@163.com> wrote:

> Hi Lorand,
> The query runs fine if I disable fetch. Thanks for your help. Could you
> tell me why we need to disable fetch?
> BTW, I was using pig-0.13.0 and hive-0.13.1.
>
> At 2014-10-14 21:03:13, "Lorand Bendig" <lben...@gmail.com> wrote:
> >Hi,
> >
> >Which Pig and Hive versions do you use? Does the query run fine if you
> >disable fetch (set opt.fetch false)?
> >
> >Thanks,
> >Lorand
> >
> >On 14/10/14 10:50, lulynn_2008 wrote:
> >> Hi All,
> >> I was running HCatStorer and HCatLoader in the Pig grunt shell, but
> >> encountered "ERROR 2088: Fetch failed. Couldn't retrieve result". Please
> >> take a glance and give your suggestions. Thanks.
> >>
> >> Test case:
> >> 1. Create the table in Hive:
> >> create table junit_unparted_basic(a int, b string) stored as RCFILE tblproperties('hcat.isd'='org.apache.hive.hcatalog.rcfile.RCFileInputDriver','hcat.osd'='org.apache.hive.hcatalog.rcfile.RCFileOutputDriver');
> >> 2. Copy the basic.input.data file into HDFS; here is the content of the file:
> >> 1    S1S
> >> 1    S2S
> >> 1    S3S
> >> 2    S1S
> >> 2    S2S
> >> 2    S3S
> >> 3    S1S
> >> 3    S2S
> >> 3    S3S
> >>
> >> 3. Run Pig: pig -useHCatalog
> >> 4. grunt> A = load 'basic.input.data' as (a:int, b:chararray);
> >> 5. grunt> store A into 'junit_unparted_basic' using org.apache.hive.hcatalog.pig.HCatStorer();
> >> 6. grunt> X = load 'junit_unparted_basic' using org.apache.hive.hcatalog.pig.HCatLoader();
> >>
> >> 7. grunt> dump X
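> >>
> >> For completeness, the same session condensed into one script (nothing
> >> beyond the steps above; the table from step 1 must already exist, and
> >> the file name run_test.pig is just a placeholder):
> >>
> >> -- run with: pig -useHCatalog run_test.pig
> >> -- load the raw tab-separated file from HDFS
> >> A = load 'basic.input.data' as (a:int, b:chararray);
> >> -- write into the pre-created Hive table through HCatalog
> >> store A into 'junit_unparted_basic' using org.apache.hive.hcatalog.pig.HCatStorer();
> >> -- read it back through HCatLoader; the dump below is what fails
> >> X = load 'junit_unparted_basic' using org.apache.hive.hcatalog.pig.HCatLoader();
> >> dump X;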
> >>
> >> Error Log:
> >>
> >> ================================================================================
> >> Pig Stack Trace
> >> ---------------
> >> ERROR 2088: Fetch failed. Couldn't retrieve result
> >>
> >> org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias X
> >>          at org.apache.pig.PigServer.openIterator(PigServer.java:912)
> >>          at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:752)
> >>          at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
> >>          at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:228)
> >>          at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:203)
> >>          at org.apache.pig.tools.grunt.Grunt.run(Grunt.java:66)
> >>          at org.apache.pig.Main.run(Main.java:542)
> >>          at org.apache.pig.Main.main(Main.java:156)
> >>          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
> >>          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
> >>          at java.lang.reflect.Method.invoke(Method.java:619)
> >>          at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
> >> Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias X
> >>          at org.apache.pig.PigServer.storeEx(PigServer.java:1015)
> >>          at org.apache.pig.PigServer.store(PigServer.java:974)
> >>          at org.apache.pig.PigServer.openIterator(PigServer.java:887)
> >>          ... 12 more
> >> Caused by: org.apache.pig.backend.executionengine.ExecException: ERROR 2088: Fetch failed. Couldn't retrieve result
> >>          at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.runPipeline(FetchLauncher.java:180)
> >>          at org.apache.pig.backend.hadoop.executionengine.fetch.FetchLauncher.launchPig(FetchLauncher.java:81)
> >>          at org.apache.pig.backend.hadoop.executionengine.HExecutionEngine.launchPig(HExecutionEngine.java:275)
> >>          at org.apache.pig.PigServer.launchPlan(PigServer.java:1367)
> >>          at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1352)
> >>          at org.apache.pig.PigServer.storeEx(PigServer.java:1011)
> >>          ... 14 more
> >>
> >> ================================================================================
> >>
> >
>
