Are you asking about passing configurations to a UDF? You can do any of the following:
1. Use the -D option on the command line
2. Put key-value entries into pig.properties, and put the folder containing pig.properties on the classpath
3. Create a text file containing key-value pairs, and pass it to Pig with the command-line option -P propertyfile

Once you have set your key-value pair, you can read it back in the UDF using the following code:
UDFContext.getUDFContext().getJobConf().get(key);
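For example, a UDF might read such a property in exec(). This is only a sketch: the property key my.udf.separator and the class name are made up for illustration, not part of any real configuration.

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.UDFContext;

// Sketch of a UDF that reads a value passed via -D, pig.properties,
// or -P propertyfile. "my.udf.separator" is a hypothetical key.
public class ConfiguredConcat extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        // Read the configured separator, falling back to "," if unset.
        String sep = UDFContext.getUDFContext()
                .getJobConf().get("my.udf.separator", ",");
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < input.size(); i++) {
            if (i > 0) sb.append(sep);
            sb.append(input.get(i));
        }
        return sb.toString();
    }
}
```

You would then pass the value at launch time, e.g. pig -Dmy.udf.separator=";" myscript.pig.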

Daniel

-----Original Message----- From: Jae Lee
Sent: Sunday, December 05, 2010 3:11 AM
To: [email protected]
Subject: Re: What's the difference between JobContext, UDFContext?

Is there an easy way to distribute information that a UDF relies on from the front end (Pig shell) to the back end (map-reduce jobs)?

I'm trying to work out a Hive table schema on the front end once, then have the back end use that schema later on.

I was hoping to be able to use either of the contexts to store schema in getSchema method.

J

On 5 Dec 2010, at 09:03, Daniel Dai wrote:

There are a couple of differences:
* JobContext is a hadoop concept, UDFContext is a Pig concept
* JobContext is per Map-reduce job, UDFContext is per Pig store

Usually you store per-store settings inside UDFContext and retrieve them
later:

Properties p = UDFContext.getUDFContext().getUDFProperties(this.getClass());
p.setProperty("someproperty", value);

You do not use JobContext directly in Pig.
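Under the hood, Pig serializes each UDF's property set on the front end and makes it available again on the back end. The round trip can be modeled with plain java.util.Properties from the standard library. This is a simplified illustration of the mechanism, not Pig's actual implementation; the key hive.table.schema is a made-up example.

```java
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

// Simplified model of the front-end/back-end property round trip:
// properties are written to a string form on the front end and
// recovered from it on the back end.
public class UdfPropsModel {

    // Front end: serialize the per-UDF properties, tagged with the
    // UDF class name as a comment.
    static String serialize(Class<?> udfClass, Properties props) throws Exception {
        StringWriter out = new StringWriter();
        props.store(out, udfClass.getName());
        return out.toString();
    }

    // Back end: recover the properties from the serialized form.
    static Properties deserialize(String serialized) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(serialized));
        return props;
    }

    public static void main(String[] args) throws Exception {
        Properties p = new Properties();
        p.setProperty("hive.table.schema", "name:string,age:int"); // hypothetical value
        String wire = serialize(UdfPropsModel.class, p);
        Properties back = deserialize(wire);
        System.out.println(back.getProperty("hive.table.schema"));
    }
}
```

The point is that anything you put into the Properties object on the front end survives the trip into the map-reduce tasks, which is why storing the schema there works.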

Daniel


-----Original Message----- From: Jae Lee
Sent: Saturday, December 04, 2010 4:11 AM
To: [email protected]
Subject: What's the difference between JobContext, UDFContext?

Hi,

What's the difference between JobContext and UDFContext? and when should I
use one or the other?

thanks

J

