Hi Koji/Rohini/Bill,

I tried the same script with Apache Pig version 0.10.1 (r1426677, compiled Dec 28 2012, 16:46:13). The script compiled without problems.
Increasing the heap size didn't help with Pig 0.11. I have opened a JIRA for this issue (https://issues.apache.org/jira/browse/PIG-3455).

Thanks,
Shubham.

On Fri, Sep 6, 2013 at 5:09 PM, Rohini Palaniswamy <rohini.adi...@gmail.com> wrote:

> I think we should fix it in pig if it is a regression from pig 0.10.
>
> Shubham,
>    If the script works fine for you in pig 0.10, can you open a jira for the issue with 0.11?
>
> Regards,
> Rohini
>
>
> On Fri, Sep 6, 2013 at 1:51 PM, Bill Graham <billgra...@gmail.com> wrote:
>
> > The getSignature method basically generates a string representation of the logical plan and then computes its hash. In your case it seems the logical plan is too large for the amount of memory you have. Try increasing the heap even more.
> >
> >
> > On Fri, Sep 6, 2013 at 1:10 PM, Koji Noguchi <knogu...@yahoo-inc.com> wrote:
> >
> > > Seems to be happening inside the method introduced in 0.11,
> > > "org.apache.pig.newplan.logical.relational.LogicalPlan.getSignature"
> > > (https://issues.apache.org/jira/browse/PIG-2587).
> > >
> > > Maybe a coincidence, but can we ask Bill to help us?
> > >
> > > Shubham, can you try your query on pig 0.10.* and see if you don't hit the OOM?
> > >
> > > Koji
> > >
> > >
> > > On Sep 4, 2013, at 1:27 PM, Shubham Chopra wrote:
> > >
> > > > Hi,
> > > >
> > > > I have a relatively large pig script (around 1.5k lines, 85 assignments). Around 150 columns are getting projected, joined, grouped and aggregated, ending in multiple stores.
> > > >
> > > > Pig 0.11.1 fails with the following error even before any jobs are fired:
> > > >
> > > > Pig Stack Trace
> > > > ---------------
> > > > ERROR 2998: Unhandled internal error.
> > > > Java heap space
> > > >
> > > > java.lang.OutOfMemoryError: Java heap space
> > > >     at java.util.Arrays.copyOf(Arrays.java:2882)
> > > >     at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
> > > >     at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
> > > >     at java.lang.StringBuilder.append(StringBuilder.java:119)
> > > >     at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.depthFirstLP(LogicalPlanPrinter.java:83)
> > > >     at org.apache.pig.newplan.logical.optimizer.LogicalPlanPrinter.visit(LogicalPlanPrinter.java:69)
> > > >     at org.apache.pig.newplan.logical.relational.LogicalPlan.getSignature(LogicalPlan.java:122)
> > > >     at org.apache.pig.PigServer.execute(PigServer.java:1237)
> > > >     at org.apache.pig.PigServer.executeBatch(PigServer.java:333)
> > > >     at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:137)
> > > >     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
> > > >     at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:170)
> > > >     at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> > > >     at org.apache.pig.Main.run(Main.java:604)
> > > >     at org.apache.pig.Main.main(Main.java:157)
> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > > >     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > > >     at java.lang.reflect.Method.invoke(Method.java:597)
> > > >     at org.apache.hadoop.util.RunJar.main(RunJar.java:160)
> > > >
> > > > Increasing the heap size to 2Gb doesn't help either. The only thing that appears to get the script working is to disable multi-query optimization.
> > > > Has anyone else faced a similar problem with Pig running out of memory while compiling the script? Any other way to get it to work besides disabling multi-query optimization?
> > > >
> > > > Thanks,
> > > > Shubham.
> >
> >
> > --
> > *Note that I'm no longer using my Yahoo! email address. Please email me at billgra...@gmail.com going forward.*
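[Editor's note] Bill's description of getSignature above can be illustrated with a minimal sketch: walk the plan, append each operator's description to a StringBuilder, then hash the resulting string. The class and method names below are illustrative, not Pig's actual implementation; the point is that the entire printed plan is materialized as one string, so a very large plan can exhaust the heap inside StringBuilder.append, which is exactly the frame sequence in the stack trace.

```java
// Illustrative sketch (NOT Pig's actual code) of the approach Bill describes:
// serialize the logical plan to one big string, then hash it.
public class PlanSignatureSketch {

    // Stand-in for LogicalPlanPrinter: concatenate every operator's
    // description into a single string via a StringBuilder.
    static String printPlan(String[] operators) {
        StringBuilder sb = new StringBuilder();
        for (String op : operators) {
            sb.append(op).append('\n'); // each append may trigger expandCapacity
        }
        return sb.toString();
    }

    // Stand-in for LogicalPlan.getSignature: hash the printed plan.
    static int signature(String[] operators) {
        return printPlan(operators).hashCode();
    }

    public static void main(String[] args) {
        String[] plan = {"LOLoad", "LOForEach", "LOJoin", "LOStore"};
        // The signature is deterministic for the same plan.
        System.out.println(signature(plan) == signature(plan));
    }
}
```

A memory-friendlier variant would stream each operator's description through an incremental hash (e.g. java.security.MessageDigest.update) instead of materializing the whole plan string at once.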
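[Editor's note] For readers hitting the same compile-time OOM, the two workarounds discussed in this thread translate to the following CLI/config fragment (flag and variable names per standard Pig usage; the script name is a placeholder, and exact heap sizes depend on your plan):

```
# Raise the heap of the client-side Pig JVM (PIG_HEAPSIZE is in MB) ...
export PIG_HEAPSIZE=4096

# ... or pass -Xmx through the Java options the pig launcher picks up.
export PIG_OPTS="-Xmx4g"

# Disable multi-query optimization, which Shubham found avoids the OOM
# (-M and -no_multiquery are equivalent).
pig -no_multiquery myscript.pig
```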