In the Pig code base check out
src/org/apache/pig/backend/hadoop/executionengine/mapReduceLayer/MRCompiler.java
and
src/org/apache/pig/backend/hadoop/executionengine/mapReduceLayer/JobControlCompiler.java
These are the classes that control the generation of MapReduce jobs.
Alan.
On Feb 25, 201
Hi Alan,
Thanks a lot for the great suggestion. The book looks very helpful.
William
On Sun, Feb 24, 2013 at 11:54 AM, Alan Gates wrote:
> For books, check out
> http://www.amazon.com/Programming-Pig-Alan-Gates/dp/1449302645/ref=sr_1_1?ie=UTF8&qid=1361724828&sr=8-1&keywords=programming+pig
>
>
On Feb 25, 2013, at 1:35 PM, Alan Gates wrote:
> Pig generates several DAGs (a logical plan, a physical plan, a set of
> MapReduce jobs). Which one are you interested in?
>
> Alan.
>
> On Feb 25, 2013, at 12:02 PM, Preeti Gupta wrote:
>
>> Hi,
>>
>> I need to do some modifications here and need to know how Pig generates DAG.
Pig generates several DAGs (a logical plan, a physical plan, a set of MapReduce
jobs). Which one are you interested in?
Alan.
On Feb 25, 2013, at 12:02 PM, Preeti Gupta wrote:
> Hi,
>
> I need to do some modifications here and need to know how Pig generates DAG.
> Can someone throw some light on this?
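(For anyone following along: all three plans Alan mentions can be printed with EXPLAIN from the Grunt shell or a script. A minimal sketch — the input file and schema here are made up, just to illustrate:)

```pig
-- hypothetical input file and schema, purely for illustration
A = LOAD 'input.txt' AS (name:chararray, cnt:int);
B = GROUP A BY name;
C = FOREACH B GENERATE group, SUM(A.cnt);
-- EXPLAIN prints the logical plan, the physical plan, and the
-- MapReduce plan that Pig generates for the alias C
EXPLAIN C;
```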
Hi,
I need to do some modifications here and need to know how Pig generates DAG.
Can someone throw some light on this?
regards
preeti
Hi Fangfang,
This is fixed by PIG-3099:
https://issues.apache.org/jira/browse/PIG-3099
Please try Pig-0.10.1 or Pig-0.11.
Thanks,
Cheolsoo
On Mon, Feb 25, 2013 at 3:10 AM, lulynn_2008 wrote:
> Hi All,
> I am using pig-0.10.0 with hadoop-1.1.1, and the following tests failed with
> the information:
I don't think I've seen anyone write loaders for NetCDF, but there is no
reason one couldn't, as far as I know.
Just need to write a Hadoop InputFormat / RecordReader that implements the
format, and wrap a thin LoadFunc around it.
There is some basic documentation here:
https://pig.apache.org/do
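(A rough skeleton of what such a loader might look like. This is only a sketch of the LoadFunc pattern — NetCDFInputFormat is a hypothetical class you would have to write yourself on top of a NetCDF reading library:)

```java
import java.io.IOException;
import org.apache.hadoop.mapreduce.InputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.pig.LoadFunc;
import org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigSplit;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

public class NetCDFLoader extends LoadFunc {
    private RecordReader reader;
    private final TupleFactory tupleFactory = TupleFactory.getInstance();

    @Override
    public void setLocation(String location, Job job) throws IOException {
        // Tell Hadoop which path(s) this load statement reads.
        FileInputFormat.setInputPaths(job, location);
    }

    @Override
    public InputFormat getInputFormat() throws IOException {
        // Hypothetical InputFormat/RecordReader pair that understands NetCDF.
        return new NetCDFInputFormat();
    }

    @Override
    public void prepareToRead(RecordReader reader, PigSplit split) throws IOException {
        this.reader = reader;
    }

    @Override
    public Tuple getNext() throws IOException {
        try {
            if (!reader.nextKeyValue()) {
                return null;  // end of input
            }
            // How you turn a NetCDF record into tuple fields depends on
            // your format; this just wraps the raw value in a 1-field tuple.
            return tupleFactory.newTuple(reader.getCurrentValue());
        } catch (InterruptedException e) {
            throw new IOException(e);
        }
    }
}
```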
Hi Everyone,
I am new to PIG. I was wondering if PIG can work with NetCDF or can be made to
work with it?
-preeti
Thanks! I'm still new to pig, but it's very informative.
On Fri, Feb 22, 2013 at 6:35 PM, Dmitriy Ryaboy wrote:
> I pulled together some of the highlights of the pig 0.11 release on the
> Apache Pig blog (which now officially exists!):
>
> https://blogs.apache.org/pig/
>
> D
Hi Johnny,
Thank you for your help.
Yes indeed, setting mapred.min.split.size to 1 or 10 MB greatly increased
the number of mappers and thus made the job complete successfully.
For the reducers however, we can only have as many reducers as machines
running (by setting default_parallel) and this is a
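(For reference, both settings discussed above can be applied per script from inside Pig. A sketch — the 10 MB split size is the value mentioned above, and the reducer count is a placeholder you would set to match your cluster:)

```pig
-- force smaller input splits so more map tasks are created (10 MB = 10485760 bytes)
set mapred.min.split.size 10485760;
-- default number of reduce tasks for this script (placeholder value)
set default_parallel 10;
```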
Hi All,
I am using pig-0.10.0 with hadoop-1.1.1, and the following tests failed with the
information:
Testcase: testSkewedJoin took 45.001 sec
FAILED
expected:<0> but was:<2>
junit.framework.AssertionFailedError: expected:<0> but was:<2>
at
org.apache.pig.test.TestEmptyInputDir.test
Thanks Dmitriy!
very useful
--
regards,
pozdrawiam,
Jakub Glapa
On Sat, Feb 23, 2013 at 2:35 AM, Dmitriy Ryaboy wrote:
> I pulled together some of the highlights of the pig 0.11 release on the
> Apache Pig blog (which now officially exists!):
>
> https://blogs.apache.org/pig/
>
> D
>