Hi Kumar,
Altering the table just updates Hive's metadata without updating the Parquet
schema.
I believe that if you insert into your table (after adding the column), you'll
be able to select all 3 columns later on.
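A minimal sketch of that idea in HiveQL (table, column, and staging-table names here are placeholders, not from the original thread):

```sql
-- Existing Parquet-backed table with two columns
CREATE TABLE t (f1 STRING, f2 STRING) STORED AS PARQUET;

-- ALTER only touches the metastore; previously written Parquet files
-- keep their original 2-field schema
ALTER TABLE t ADD COLUMNS (f3 STRING);

-- Files written after the ALTER carry the 3-field schema
-- (some_staging_table is a hypothetical source)
INSERT INTO TABLE t SELECT f1, f2, 'new' FROM some_staging_table;

-- Old files have no f3 to return; newly written files do
SELECT f1, f2, f3 FROM t;
```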
Daniel
On Jan 14, 2015, at 21:34, Kumar V kumarbuyonl...@yahoo.com wrote:
Can you run your query with the following config:
hive> set hive.fetch.task.conversion=none;
and run your two queries with this. Let's see if this makes a difference. My
expectation is that this will result in an MR job getting launched, and thus
the runtimes might be different.
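For reference, the suggested diagnostic session would look something like this in the Hive CLI (the SELECT is a stand-in for whichever two queries are being compared):

```sql
-- Disable fetch-task conversion so even simple SELECTs launch an MR job
-- instead of being served directly by the client
SET hive.fetch.task.conversion=none;

-- Re-run both queries and compare the runtimes
SELECT * FROM t;
```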
On Sat, Jan 10, 2015 at 4:54 PM,
Hi,
Any ideas on how to go about this? Any insights you have would be helpful.
I'm kind of stuck here.
Here are the steps I followed on hive 0.13
1) create table t (f1 string, f2 string) stored as parquet;
2) upload parquet files with 2 fields
3) select * from t; works fine.
4) alter table
Hi, thanks for your response. I can't do another insert as the data is
already in the table. Also, since there is a lot of data in the table already,
I am trying to find a way to avoid reprocessing/reloading.
Thanks.
On Wednesday, January 14, 2015 2:47 PM, Daniel Haviv
Just a heads up for anyone that has been using jdbc:hive. I noticed a recent hive...
jdbc:hive2://myhost:port
SQL exception: Invalid URL
It might be better if the exception said: "Invalid URL. URL must start with
jdbc:hive"
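For reference, a HiveServer2 connection string has roughly this shape (host, port, and database are placeholders):

```
jdbc:hive2://myhost:10000/default
```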
*Hive 13
I'm storing a sales amount column as a double in an ORC table and when I do:
select sum(x) from sometable
I get a value like 4.79165141174808E9
A visual inspection of the column values reveals no glaring anomalies...all
looks pretty normal. If I do the same thing in a textfile table
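A value like 4.79165141174808E9 is just scientific notation for a large double, about 4.79 billion, so the number itself may be fine and only the rendering surprising. A sketch of two ways to get a plain decimal rendering (column and table names follow the example above):

```sql
-- Cast the double sum to DECIMAL to avoid scientific notation in the output
SELECT CAST(SUM(x) AS DECIMAL(18,2)) FROM sometable;

-- Or format it as a string with printf
SELECT printf('%.2f', SUM(x)) FROM sometable;
```

Note that summing doubles in a distributed job can also differ slightly between file formats due to accumulation order, so comparing against the textfile table with an exact-precision DECIMAL column is the safer check.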
After I changed org.apache.hcatalog.pig.HCatStorer() to
org.apache.hive.hcatalog.pig.HCatStorer(), it worked.
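For anyone hitting the same thing: the fix is the package rename from org.apache.hcatalog to org.apache.hive.hcatalog. In a Pig script that looks like (relation and table names are placeholders):

```
-- old, no longer works:  USING org.apache.hcatalog.pig.HCatStorer()
STORE data INTO 'mydb.mytable' USING org.apache.hive.hcatalog.pig.HCatStorer();
```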
Patcharee
On 01/14/2015 02:57 PM, Patcharee Thongtra wrote:
Hi,
I am having a weird problem. I created a table in orc format:
Create table
Hi,
I have deleted the original Hive metadata database from MySQL and re-created a
new one with character set = 'latin1'.
I also put the jar file into HDFS with a shorter file name, so the 'max key
length is 767 bytes' issue from MySQL is resolved.
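The metastore re-creation described above is typically done with something like the following in the MySQL client (the database name "metastore" is a placeholder for whatever your hive-site.xml points at):

```sql
-- Re-create the metastore database with latin1, whose narrower encoding
-- keeps index keys under MySQL's 767-byte limit
DROP DATABASE IF EXISTS metastore;
CREATE DATABASE metastore CHARACTER SET latin1 COLLATE latin1_bin;
```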
Tried again:
1) drop function sysdate;
2) CREATE
Hi,
I am having a weird problem. I created a table in orc format:
Create table
create external table cossin (x int, y int, cos float, sin float)
PARTITIONED BY(zone int) stored as orc location