Just like with normal Spark jobs, that command returns an RDD that contains
the lineage for computing the answer but does not actually compute it.
You'll need to run collect() on the RDD in order to get the result.
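
A minimal sketch of what that looks like, assuming Spark 1.x where
hiveContext is an already-created HiveContext, the tables t1/t2 exist,
and myUDF has been registered (all taken from the question below):

```scala
// hql() only builds the query plan (the RDD lineage); nothing runs yet.
val resultRDD = hiveContext.hql(
  "select count(t1.col1) from t1 join t2 where myUDF(t1.id, t2.id) = true")

// collect() triggers the actual computation and brings the rows back
// to the driver; the count is the single value in the first row.
val rows = resultRDD.collect()
println(rows(0)(0))
```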


On Mon, Aug 25, 2014 at 11:46 AM, S Malligarjunan <
smalligarju...@yahoo.com.invalid> wrote:

> Hello All,
>
> I have executed the following udf sql in my spark hivecontext,
>
> hiveContext.hql("select count(t1.col1) from t1 join t2 where myUDF(t1.id,
> t2.id) = true")
> Where do I find the count output?
>
> Thanks and Regards,
> Sankar S.
>
>
