In terms of the NullPointerException, I think it is a bug: the test data
directories might have been moved already, so it failed to load the test data
needed to create the test tables. You may create a JIRA for this.
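
Until that is fixed, a minimal sketch of the workaround Xin describes below is
to construct the test context yourself with loadTestTables disabled (this
assumes a SparkContext "sc" is already available; "SELECT 1" is just a trivial
query to show the context works):

// Sketch only: second constructor argument disables loading the bundled test tables.
val testHive = new org.apache.spark.sql.hive.test.TestHiveContext(sc, false)
testHive.sql("SELECT 1").show()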

On Fri, Jan 13, 2017 at 11:44 AM, Xin Wu <xwu0...@gmail.com> wrote:

> If you are using spark-shell, you already have the instance "sc" as the
> initialized SparkContext. If you are writing your own application, you need
> to create a SparkSession, which comes with a SparkContext, so you can
> reference it as sparkSession.sparkContext.
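>
> As a minimal sketch of that setup in a standalone application (the app name
> and variable names are just illustrative):
>
> import org.apache.spark.sql.SparkSession
>
> // Build a SparkSession; the SparkContext comes with it.
> val sparkSession = SparkSession.builder()
>   .appName("my-test-app")
>   .enableHiveSupport()
>   .getOrCreate()
> val sc = sparkSession.sparkContext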
>
> In terms of creating a table from a DataFrame, do you intend to create it
> via TestHive, or do you just want to create a Hive serde table for the
> DataFrame?
>
> On Fri, Jan 13, 2017 at 10:23 AM, Nicolas Tallineau <
> nicolas.tallin...@ubisoft.com> wrote:
>
>> But it forces you to create your own SparkContext, which I’d rather not
>> do.
>>
>>
>>
>> Also it doesn’t seem to allow me to directly create a table from a
>> DataFrame, as follows:
>>
>>
>>
>> TestHive.createDataFrame[MyType](rows).write.saveAsTable("a_table")
>>
>>
>>
>> *From:* Xin Wu [mailto:xwu0...@gmail.com]
>> *Sent:* 13 January 2017 12:43
>> *To:* Nicolas Tallineau <nicolas.tallin...@ubisoft.com>
>> *Cc:* user@spark.apache.org
>> *Subject:* Re: [Spark SQL - Scala] TestHive not working in Spark 2
>>
>>
>>
>> I used the following:
>>
>>
>> val testHive = new org.apache.spark.sql.hive.test.TestHiveContext(sc, false)
>>
>> val hiveClient = testHive.sessionState.metadataHive
>> hiveClient.runSqlHive("….")
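>>
>> For instance, with a hypothetical statement (the table name is made up):
>>
>> hiveClient.runSqlHive("CREATE TABLE demo_table (id INT)")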
>>
>>
>>
>> On Fri, Jan 13, 2017 at 6:40 AM, Nicolas Tallineau <
>> nicolas.tallin...@ubisoft.com> wrote:
>>
>> Since migrating to Spark 2, I get a NullPointerException as soon as I try
>> to execute a TestHive.sql(...) statement, because it's trying to load
>> non-existent "test tables". I couldn't find a way to set the loadTestTables
>> variable to false.
>>
>>
>>
>> Caused by: sbt.ForkMain$ForkError: java.lang.NullPointerException: null
>>     at org.apache.spark.sql.hive.test.TestHiveSparkSession.getHiveFile(TestHive.scala:190)
>>     at org.apache.spark.sql.hive.test.TestHiveSparkSession.org$apache$spark$sql$hive$test$TestHiveSparkSession$$quoteHiveFile(TestHive.scala:196)
>>     at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:234)
>>     at org.apache.spark.sql.hive.test.TestHiveSparkSession.<init>(TestHive.scala:122)
>>     at org.apache.spark.sql.hive.test.TestHiveContext.<init>(TestHive.scala:80)
>>     at org.apache.spark.sql.hive.test.TestHive$.<init>(TestHive.scala:47)
>>     at org.apache.spark.sql.hive.test.TestHive$.<clinit>(TestHive.scala)
>>
>>
>>
>> I’m using Spark 2.1.0 in this case.
>>
>>
>>
>> Am I missing something or should I create a bug in Jira?
>>
>>
>>
>>
>>
>> --
>>
>> Xin Wu
>> (650)392-9799
>>
>
>
>
> --
> Xin Wu
> (650)392-9799
>



-- 
Xin Wu
(650)392-9799
