yTable
>
> I also migrated from parquet to ORC, not sure if this has an impact or not.
>
> Thank you for your help.
>
> From: Mich Talebzadeh <mich.talebza...@gmail.com>
> Date: Sunday, April 10, 2016 at 11:54 PM
> To: maurin lenglart <mau...@cuberonlabs.com>
> Cc: "user @spark" <user@spark.apache.org>
> Subject: Re: alter table add columns alternatives or hive refresh
> This should work. Make sure that you use HiveContext.sql and sqlContext
> correctly.
>
> This is an example in Spark, reading a CSV file, doing som
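Mich's CSV example is truncated in the archive. For reference, a minimal sketch of the HiveContext pattern he points at, in Spark 1.x style; the app name and the table/column names (taken from the thread) are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

val sc = new SparkContext(new SparkConf().setAppName("AlterTableSketch"))

// Hive DDL such as ALTER TABLE must go through a HiveContext;
// a plain SQLContext does not talk to the Hive metastore.
val hiveContext = new HiveContext(sc)

hiveContext.sql("ALTER TABLE myTable ADD COLUMNS (mycol STRING)")

// Subsequent reads through the same HiveContext see the new schema
hiveContext.sql("SELECT * FROM myTable").printSchema()
```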
>
> http://talebzadehmich.wordpress.com
>
> On 10 April 2016 at 19:34, Maurin Lenglart <mau...@cuberonlabs.com> wrote:
>
>> Hi,
>> So basically you are telling me that I need to recreate a table, and
>> re-insert everything every [...]s that will allow me not to move TB of data
>> everyday?
>>
>> Thanks for your answer
>>
>> From: Mich Talebzadeh <mich.talebza...@gmail.com>
>> Date: Sunday, April 10, 2016 at 12:25 PM
>> To: "user @spark" <user@spark.apache.org>
>> Subject: Re: alter table add columns alternatives or hive refresh
>>
>> Hi,
>> I am confining myself to Hive tables.
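For what it is worth, the "recreate and re-insert" route being discussed would look roughly like the sketch below. This is an assumption-laden illustration only: the column names are made up, and on TB-scale data it rewrites everything, which is exactly the cost Maurin is asking to avoid.

```scala
import org.apache.spark.sql.hive.HiveContext
// assumes an existing SparkContext `sc`
val hiveContext = new HiveContext(sc)

// Build a new table with the extra column, copy the old data across
// (new column filled with NULL), then swap the tables.
hiveContext.sql(
  "CREATE TABLE myTable_new (col1 STRING, col2 INT, mycol STRING) STORED AS ORC")
hiveContext.sql(
  "INSERT INTO TABLE myTable_new SELECT col1, col2, CAST(NULL AS STRING) FROM myTable")
hiveContext.sql("DROP TABLE myTable")
hiveContext.sql("ALTER TABLE myTable_new RENAME TO myTable")
```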
> From: Mich Talebzadeh <mich.talebza...@gmail.com>
> Date: Sunday, April 10, 2016 at 3:41 AM
> To: maurin lenglart <mau...@cuberonlabs.com>
> Cc: "user@spark.apache.org" <user@spark.apache.org>
> Subject: Re: alter table add columns alternatives or hive refresh
I have not tried it on Spark, but a column added in Hive to an existing
table cannot be updated for existing rows. In other words, the new column is
set to null, which does not require a change in the existing file length.
So basically, as I understand it, when a column is added to an already
existing table, it is null for the existing rows.
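A short sketch of the behaviour described above, assuming a Hive-backed table named myTable (hypothetical) and an existing SparkContext:

```scala
import org.apache.spark.sql.hive.HiveContext
// assumes an existing SparkContext `sc`
val hiveContext = new HiveContext(sc)

// Adding a column only changes the metastore schema; existing data files
// are untouched, so old rows read the new column as NULL.
hiveContext.sql("ALTER TABLE myTable ADD COLUMNS (mycol STRING)")
hiveContext.sql("SELECT mycol FROM myTable").show()
// rows written before the ALTER show mycol as null
```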
Hi,
I am trying to add columns to a table that I created with the “saveAsTable” api.
I update the columns using sqlContext.sql(‘alter table myTable add columns
(mycol string)’).
The next time I create a df and save it in the same table with the new columns,
I get a:
“ParquetRelation
requires