Is there an easy way to distribute information that a UDF relies on from the front end
(Pig shell) to the back end (the map-reduce jobs)?
I'm trying to work out the Hive table schema once on the front end, then have
the back end use that schema later on. I was hoping to be able to use one of
the contexts to store the schema in the getSchema method.
J
On 5 Dec 2010, at 09:03, Daniel Dai wrote:
> There are a couple of differences:
> * JobContext is a Hadoop concept; UDFContext is a Pig concept.
> * JobContext is per map-reduce job; UDFContext is per Pig store.
>
> Usually you store per-store settings inside UDFContext and retrieve them
> later:
>
> Properties p = UDFContext.getUDFContext().getUDFProperties(this.getClass());
> p.setProperty("someproperty", value);
>
> You do not use JobContext directly in Pig.
>
> Daniel
>
>
> -----Original Message-----
> From: Jae Lee
> Sent: Saturday, December 04, 2010 4:11 AM
> To: [email protected]
> Subject: What's the difference between JobContext, UDFContext?
>
> Hi,
>
> What's the difference between JobContext and UDFContext? and when should I
> use one or the other?
>
> thanks
>
> J
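
The round trip Daniel describes (set a property on the front end, read it back later) can be sketched as below. This is a minimal, self-contained illustration only: the FakeUDFContext class stands in for Pig's real org.apache.pig.impl.util.UDFContext, which additionally serializes the stored properties into the job configuration so they survive the trip to the map-reduce back end. The loader class, schema string, and property key are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Stand-in for Pig's UDFContext: one Properties bag per UDF class,
// mimicking UDFContext.getUDFContext().getUDFProperties(Class).
// (The real UDFContext also ships these properties to the back end
// inside the Hadoop job configuration.)
class FakeUDFContext {
    private static final FakeUDFContext INSTANCE = new FakeUDFContext();
    private final Map<Class<?>, Properties> perUdf = new HashMap<>();

    static FakeUDFContext getUDFContext() {
        return INSTANCE;
    }

    synchronized Properties getUDFProperties(Class<?> udfClass) {
        return perUdf.computeIfAbsent(udfClass, k -> new Properties());
    }
}

// A hypothetical loader-style UDF: getSchema() runs once on the front
// end (Pig shell) and caches the schema string; the back end later
// reads the cached value instead of re-deriving it.
class MyHiveLoader {
    static final String SCHEMA_KEY = "hive.table.schema";

    // Front end: work out the schema once and stash it.
    void getSchema() {
        Properties p = FakeUDFContext.getUDFContext()
                                     .getUDFProperties(this.getClass());
        p.setProperty(SCHEMA_KEY, "name:chararray,age:int");
    }

    // Back end (map task): retrieve what the front end stored.
    String getCachedSchema() {
        Properties p = FakeUDFContext.getUDFContext()
                                     .getUDFProperties(this.getClass());
        return p.getProperty(SCHEMA_KEY);
    }
}

public class Main {
    public static void main(String[] args) {
        MyHiveLoader loader = new MyHiveLoader();
        loader.getSchema();                           // front-end side
        System.out.println(loader.getCachedSchema()); // back-end side
    }
}
```

Keying the properties by the UDF's class (as the real getUDFProperties does) is what keeps one store's settings from colliding with another's.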