Hi Divya,

That's strange. Are you able to post a snippet of your code to look at? And are you sure that you're saving the DataFrames as per the docs (https://phoenix.apache.org/phoenix_spark.html)?
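For reference, the docs linked above show a save path that goes through the phoenix-spark implicits rather than a user-specified schema. This is only a sketch based on that page, not something tested against your cluster; "OUTPUT_TABLE" and the zkUrl value are placeholders you'd replace with your own table name and ZooKeeper quorum:

```scala
import org.apache.phoenix.spark._

// Write an existing DataFrame to a Phoenix table. Note there is no
// .schema(...) call anywhere: the phoenix-spark integration reads the
// column types from the Phoenix table itself, and supplying your own
// schema is what triggers the "does not allow user-specified schemas"
// AnalysisException.
df.saveToPhoenix(Map(
  "table" -> "OUTPUT_TABLE",       // must already exist in Phoenix
  "zkUrl" -> "zkhost:2181"         // placeholder ZooKeeper quorum
))
```

The target table has to be created in Phoenix beforehand (via sqlline or JDBC); saveToPhoenix maps DataFrame columns onto the existing table's columns by name.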
Depending on your HDP version, it may or may not actually have phoenix-spark support. Double-check that your Spark configuration is set up with the right worker/driver classpath settings, and that the Phoenix JARs contain the necessary phoenix-spark classes (e.g. org.apache.phoenix.spark.PhoenixRelation). If not, I suggest following up with Hortonworks.

Josh

On Fri, Apr 8, 2016 at 1:22 AM, Divya Gehlot <divya.htco...@gmail.com> wrote:
> Hi,
> I have a Hortonworks Hadoop cluster with the following configuration:
> Spark 1.5.2
> HBase 1.1.x
> Phoenix 4.4
>
> I am able to connect to Phoenix through a JDBC connection and read the
> Phoenix tables.
> But while writing the data back to a Phoenix table I get the error below:
>
> org.apache.spark.sql.AnalysisException:
> org.apache.phoenix.spark.DefaultSource does not allow user-specified
> schemas.;
>
> Can anybody help in resolving the above error, or suggest any other way of
> saving Spark DataFrames to Phoenix?
>
> Would really appreciate the help.
>
> Thanks,
> Divya
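One quick way to run the check Josh suggests is to list the contents of the Phoenix client JAR and grep for the phoenix-spark classes. This is just a sketch; the JAR path below is the usual HDP layout and may differ on your install:

```shell
# Path is an assumption based on typical HDP installs -- adjust as needed.
PHOENIX_JAR=/usr/hdp/current/phoenix-client/phoenix-client.jar

# If this prints nothing, the JAR was built without phoenix-spark support.
jar tf "$PHOENIX_JAR" | grep 'org/apache/phoenix/spark/PhoenixRelation'

# Also confirm the same JAR is on the Spark driver/executor classpaths, e.g.
# in spark-defaults.conf:
#   spark.driver.extraClassPath   /usr/hdp/current/phoenix-client/phoenix-client.jar
#   spark.executor.extraClassPath /usr/hdp/current/phoenix-client/phoenix-client.jar
```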