Hi,

I have some shell scripts that need to run on Spark. I want to read and write 
data stored in HAWQ, but I don't want to modify my scripts too much. How should 
I configure SparkSession so that Spark works with HAWQ the same way it works 
with Hive? For example, I can use ".enableHiveSupport()" to enable Hive, and I 
would like to change my scripts with just a statement like 
".enableHawqSupport()".
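For context, Spark has no built-in ".enableHawqSupport()". A common workaround today is Spark's generic JDBC data source, since HAWQ speaks the PostgreSQL wire protocol. A minimal sketch for spark-shell follows; the host, port, database, table names, and user below are placeholders, not values from this thread:

```scala
import org.apache.spark.sql.SparkSession

// Sketch only: connection details are hypothetical placeholders.
val spark = SparkSession.builder()
  .appName("hawq-jdbc-example")
  .getOrCreate()

// Read a HAWQ table through the PostgreSQL JDBC driver.
val df = spark.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://hawq-master:5432/mydb")
  .option("dbtable", "my_input_table")
  .option("user", "gpadmin")
  .option("driver", "org.postgresql.Driver")
  .load()

// Write results back to another HAWQ table.
df.write
  .format("jdbc")
  .option("url", "jdbc:postgresql://hawq-master:5432/mydb")
  .option("dbtable", "my_output_table")
  .option("user", "gpadmin")
  .option("driver", "org.postgresql.Driver")
  .save()
```

This requires the PostgreSQL JDBC driver jar on the Spark classpath (e.g. via --jars), which is more setup than a single builder call would be.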

Thanks,
Zewen Chi