Hi all,
I am trying to create a custom RDD class for the result sets of queries
run through InMobi Grill (http://inmobi.github.io/grill/).

Each result set has a schema (similar to Hive's TableSchema) and a path in
HDFS containing the result set data.

An easy way of doing this is to create a temp table in Hive and build the
RDD with HCatInputFormat via the newAPIHadoopRDD call. I've already done
this and it works.
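
For reference, this is roughly the shape of what I have, as a sketch
rather than my exact code: the "grilldb"/"grill_result_tmp" names are
placeholders, and I'm assuming the org.apache.hive.hcatalog packages
(the older org.apache.hcatalog ones have a slightly different setInput):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.WritableComparable
    import org.apache.hadoop.mapreduce.Job
    import org.apache.hive.hcatalog.data.HCatRecord
    import org.apache.hive.hcatalog.mapreduce.HCatInputFormat

    // Point HCatInputFormat at the Hive temp table backing the result set.
    // The database and table names here are placeholders.
    val job = Job.getInstance(new Configuration())
    HCatInputFormat.setInput(job, "grilldb", "grill_result_tmp")

    // Each row comes back as an HCatRecord; the key is not interesting.
    val resultRDD = sc.newAPIHadoopRDD(
      job.getConfiguration,
      classOf[HCatInputFormat],
      classOf[WritableComparable[_]],
      classOf[HCatRecord]
    ).map(_._2)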

However, I also want to *delete* the temp table when the RDD is
unpersisted, or when the SparkContext shuts down. How could I do that in
Spark?

Does Spark allow users to register code to be executed when an RDD is
freed? Something like the OutputCommitter in Hadoop?
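
To make the question concrete, here's the kind of hook I'm imagining, as
a hypothetical sketch: I made up the rddId-to-table map and the
dropTempTable helper, and I'm not sure a SparkListener is the intended
tool for this, since it's marked DeveloperApi:

    import org.apache.spark.scheduler.{SparkListener,
      SparkListenerApplicationEnd, SparkListenerUnpersistRDD}

    import scala.collection.mutable

    // Hypothetical cleanup hook: drop the Hive temp table backing an RDD
    // when that RDD is unpersisted, and drop whatever is left when the
    // application ends.
    class TempTableCleanupListener(rddIdToTable: mutable.Map[Int, String])
        extends SparkListener {

      override def onUnpersistRDD(event: SparkListenerUnpersistRDD): Unit = {
        // Drop the temp table tied to this RDD, if we registered one.
        rddIdToTable.remove(event.rddId).foreach(dropTempTable)
      }

      override def onApplicationEnd(event: SparkListenerApplicationEnd): Unit = {
        // Clean up anything left over when the SparkContext goes away.
        rddIdToTable.values.foreach(dropTempTable)
        rddIdToTable.clear()
      }

      // Stand-in for the real Hive call, e.g. "DROP TABLE IF EXISTS ...".
      private def dropTempTable(table: String): Unit = {
        // issue the drop through a Hive client here
      }
    }

    // Registered with:
    // sc.addSparkListener(new TempTableCleanupListener(registry))

Would something along those lines be reliable, or is there a
better-supported hook?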

Thanks,
Jaideep
