Ambarish-Giri commented on issue #3395:
URL: https://github.com/apache/hudi/issues/3395#issuecomment-895725694


   Hi @nsivabalan,
   
   The only difference on my side was an older Hudi version (0.7.0), but I have now upgraded to 0.8.0 as well for verification.
   As suggested, I tried the two possible configurations for Spark 2, one with Scala 2.11 and the other with Scala 2.12, as below:
   
   scalaVersion := "2.12.11"
   libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.7"
   libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.7"
   libraryDependencies += "org.apache.hudi" %% "hudi-spark-bundle" % "0.8.0"
   libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.7"
   
   and 
   
   scalaVersion := "2.11.12"
   libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.7"
   libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.7"
   libraryDependencies += "org.apache.hudi" %% "hudi-spark-bundle" % "0.8.0"
   libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.7"
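
   As a sanity check, I also confirm that the Scala version on the cluster matches the build, since a 2.11/2.12 mismatch silently breaks these bundles. A minimal sketch, assuming `spark` is an active SparkSession (e.g. in spark-shell):

   // Verify the runtime versions match the sbt build above
   println(s"Spark version: ${spark.version}")                       // expect 2.4.7
   println(s"Scala version: ${scala.util.Properties.versionString}") // expect "version 2.11.x" or "version 2.12.x"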
   
   But still no luck.
   
   I just wanted to know whether any additional configuration is required when querying a MoR table with the read-optimized query option.
   Currently I am using the following:
   
   import org.apache.hudi.DataSourceReadOptions

   // Read-optimized query: reads only the compacted base files of the MoR table
   spark.read
     .format("hudi")
     .option(DataSourceReadOptions.QUERY_TYPE_OPT_KEY,
             DataSourceReadOptions.QUERY_TYPE_READ_OPTIMIZED_OPT_VAL)
     .load(s"$basePath/$tableName")
     .show(50, false)
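
   For reference, the corresponding snapshot query on the same table, which I can compare against (a minimal sketch using the same 0.8.0 constants; unlike read-optimized, it merges the base files with the log files):

   // Snapshot query: merged view of base files plus delta log files
   spark.read
     .format("hudi")
     .option(DataSourceReadOptions.QUERY_TYPE_OPT_KEY,
             DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL)
     .load(s"$basePath/$tableName")
     .show(50, false)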
   

