Not sure I understand. Are you saying that pig takes -D<property>=<value>
parameters directly? Will the following work:
"pig -Dmapred.task.timeout=0 -f myfile.pig"
On Thu, Jan 28, 2010 at 11:08 AM, Amogh Vasekar wrote:
Hi,
You should be able to pass this as a cmd line argument using -D ... If you want
to change it for all jobs on your own cluster, set it in mapred-site.xml.
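For example, a minimal sketch of both approaches (assuming a Hadoop 0.20-era
setup; a value of 0 disables the timeout entirely):

  # per job, on the Pig command line:
  pig -Dmapred.task.timeout=0 -f myfile.pig

and for all jobs, in conf/mapred-site.xml (the timeout is enforced by the
TaskTrackers, so the slave nodes need this setting as well):

  <property>
    <name>mapred.task.timeout</name>
    <value>0</value>
  </property>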
Amogh
On 1/28/10 11:03 AM, "prasenjit mukherjee" wrote:
Thanks Amogh for your quick response. Will changing this property only in the
master's hadoop-site.xml do, or do I need to change it on all the slaves as
well?
Is there any way I can do this from Pig (or I guess I am asking too much here :) )?
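One possibility, if your bin/pig script honors the PIG_OPTS environment
variable (an assumption worth verifying for your version), is to pass the
property through it without touching any config file:

  PIG_OPTS="-Dmapred.task.timeout=0" pig -f myfile.pig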
On Thu, Jan 28, 2010 at 10:57 AM, Amogh Vasekar wrote:
Yes, the parameter is mapred.task.timeout, in milliseconds.
You can also update the task status / write output to stdout every so often to
avoid this :)
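In a plain MapReduce job that advice looks roughly like the sketch below (old
org.apache.hadoop.mapred API of that era; SlowReducer and doExpensiveWork are
hypothetical stand-ins for your own code):

  import java.io.IOException;
  import java.util.Iterator;
  import org.apache.hadoop.io.*;
  import org.apache.hadoop.mapred.*;

  public class SlowReducer extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> output,
                       Reporter reporter) throws IOException {
      while (values.hasNext()) {
        doExpensiveWork(values.next());         // the long-running step
        reporter.progress();                    // resets the timeout clock
        reporter.setStatus("still on " + key);  // visible in the job web UI
      }
    }
    private void doExpensiveWork(IntWritable v) { /* your slow logic here */ }
  }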
Amogh
On 1/28/10 10:52 AM, "prasenjit mukherjee" wrote:
Now I see. The tasks are failing with the following error message:
*Task attempt_201001272359_0001_r_00_0 failed to report status for 600
seconds. Killing!*
Looks like hadoop kills/restarts tasks which take more than 600 seconds. Is
there any way I can increase it to some very high number?
PIG_CLASSPATH= pig
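That is, point PIG_CLASSPATH at the directory holding your hadoop-site.xml
before launching pig. For example, assuming the $HADOOP_HOME/conf from your
java -cp invocation is the right one:

  PIG_CLASSPATH=$HADOOP_HOME/conf pig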
Alan.
On Jan 27, 2010, at 11:54 AM, Aryeh Berkowitz wrote:
When I run Pig, I connect to the local file system; when I run (java -cp
pig-0.5.0-core.jar:$HADOOP_HOME/conf org.apache.pig.Main) I connect to HDFS. It
seems like Pig is not finding my hadoop conf directory. Where do I specify this?
Thanks Rekha.
These issues seem to be related to cleaning up Pig/Hadoop files upon shutdown
of the VM. I just checked, and when I shut down the VM all files are cleaned
up as expected.
My issue is that I have Pig jobs that run in an app server and are triggered
by Quartz. It might be days or weeks ...
Felix,
It looks like you are using the piggybank from trunk, while the version of
Pig you are on is 0.5. There are new packages and classes, and even some
interface changes, in the 0.7 (trunk) piggybank; the two aren't compatible.
Grab the piggybank from the 0.5 branch.
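For example, something like this (the Subversion path is an assumption based
on Pig's layout at the time, when it was still a Hadoop subproject):

  svn co http://svn.apache.org/repos/asf/hadoop/pig/branches/branch-0.5/ pig-0.5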
-D
Before building piggybank you need to run 'ant jar compile-test' at the top
level. From the error messages I'm guessing you didn't do that.
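A minimal sketch of the whole sequence (assuming your source tree is at
/usr/local/pig, as your shell prompt suggests):

  cd /usr/local/pig
  ant jar compile-test        # build pig.jar and the test classes first
  cd contrib/piggybank/java
  ant                         # piggybank should now compile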
Alan.
On Jan 26, 2010, at 10:53 PM, felix gao wrote:
Hi all,
Just downloaded it and when following the instructions to build there are
compilation errors. Please let me know how to fix this.
Thanks,
Felix
/usr/local/pig > echo $CLASSPATH
/usr/local/hadoop/hadoop-0.20.1-core.jar:/usr/local/hadoop/hadoop-0.2