Hi Michael,

Thanks for your reply.  Is this the correct way to load data from Spark
into Parquet?  Somehow it doesn't feel right.  When we followed the steps
described for storing the data into Hive tables everything was smooth: we
used HiveContext and the table was automatically recognised by Hive (and
Impala).
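
For reference, the Hive path that worked for us looked roughly like this
(just a sketch; the case class, table name and columns below are made up):

import org.apache.spark.sql.hive.HiveContext

// Illustrative schema only; our real case class is different.
case class Event(id: Int, name: String)

val hiveContext = new HiveContext(sc)
import hiveContext._

// Create a managed Hive table; Hive (and Impala) pick it up from the metastore.
hql("CREATE TABLE IF NOT EXISTS hive_events (id INT, name STRING)")

// Insert an RDD of case classes (the implicit createSchemaRDD conversion applies).
val events = sc.parallelize(Seq(Event(1, "start"), Event(2, "stop")))
events.insertInto("hive_events")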

When we loaded the data into Parquet using the method I described, we used
both SQLContext and HiveContext.  We had to define the table manually using
CREATE EXTERNAL TABLE in Hive, and then refresh it in Impala to see changes.
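
Concretely, the external table we define looks something like this (a rough
sketch; we run the DDL in the Hive shell, but it could equally go through
hiveContext.hql, the exact syntax depends on the Hive/Impala version, and the
column list is just an example):

// Define an external table over the Parquet directory Spark writes to,
// so that Hive and Impala can see it.
hiveContext.hql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS ParqTable (id INT, name STRING)
  STORED AS PARQUET
  LOCATION 'hdfs://localhost:8020/user/hive/warehouse/ParqTable.pqt'
""")

// After each batch of inserts Impala only sees the new files once we run
// "REFRESH ParqTable" (or "INVALIDATE METADATA" for a brand new table) in
// impala-shell.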

So the problem isn't just the refresh; it's that we're unsure of the best
practice for loading data into Parquet tables.  Is the way we are doing the
Spark part correct, in your opinion?

Best regards,
Patrick

On 1 August 2014 19:32, Michael Armbrust <mich...@databricks.com> wrote:

> So is the only issue that impala does not see changes until you refresh
> the table?  This sounds like a configuration that needs to be changed on
> the impala side.
>
>
> On Fri, Aug 1, 2014 at 7:20 AM, Patrick McGloin <mcgloin.patr...@gmail.com> wrote:
>
>> Sorry, sent early, wasn't finished typing.
>>
>> CREATE EXTERNAL TABLE ....
>>
>> Then we can select the data using Impala.  But this is registered as an
>> external table and must be refreshed if new data is inserted.
>>
>> Obviously this doesn't seem good and doesn't seem like the correct
>> solution.
>>
>> How should we insert data from SparkSQL into a Parquet table which can be
>> directly queried by Impala?
>>
>> Best regards,
>> Patrick
>>
>>
>> On 1 August 2014 16:18, Patrick McGloin <mcgloin.patr...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> We would like to use Spark SQL to store data in Parquet format and then
>>> query that data using Impala.
>>>
>>> We've tried to come up with a solution and it is working, but it doesn't
>>> seem good.  So I was wondering if you guys could tell us the correct way
>>> to do this.  We are using Spark 1.0 and Impala 1.3.1.
>>>
>>> First we are registering our tables using SparkSQL:
>>>
>>> val sqlContext = new SQLContext(sc)
>>> sqlContext.createParquetFile[ParqTable]("hdfs://localhost:8020/user/hive/warehouse/ParqTable.pqt", true)
>>>
>>> Then we are using the HiveContext to register the table and do the
>>> insert:
>>>
>>> val hiveContext = new HiveContext(sc)
>>> import hiveContext._
>>>
>>> hiveContext.parquetFile("hdfs://localhost:8020/user/hive/warehouse/ParqTable.pqt").registerAsTable("ParqTable")
>>> eventsDStream.foreachRDD(event=>event.insertInto("ParqTable"))
>>>
>>> Now we have the data stored in a Parquet file.  To access it in Hive or
>>> Impala we run
>>>
>>>
>>
>
