Create a new table and copy the data from your source table, applying
lower(col1) in the select; then drop the old table and rename the new table to the old table's name.
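The create-copy-drop-rename steps above can be sketched end to end. Since a live Hive isn't available here, this is a minimal runnable illustration using SQLite from Python (the table name and data are made up); in HiveQL the same three steps would be a CTAS, a DROP TABLE, and an ALTER TABLE ... RENAME TO:

```python
# Workaround sketch: build a lower-cased copy of the table, drop the
# original, then rename the copy back to the original name.
# SQLite stands in for Hive; table/columns are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (col1 TEXT, col2 TEXT, col3 TEXT)")
conn.execute("INSERT INTO mytable VALUES ('FOO', 'a', 'b'), ('Bar', 'c', 'd')")

# 1. Create the new table with col1 lower-cased.
conn.execute("CREATE TABLE mytable_new AS "
             "SELECT lower(col1) AS col1, col2, col3 FROM mytable")
# 2. Drop the old table.
conn.execute("DROP TABLE mytable")
# 3. Rename the new table to the old name.
conn.execute("ALTER TABLE mytable_new RENAME TO mytable")

rows = conn.execute("SELECT col1 FROM mytable ORDER BY col1").fetchall()
print(rows)  # col1 values are now lower-cased
```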
On Fri, Mar 16, 2012 at 3:56 AM, Richard wrote:
> if I want to update a table, e.g.,
>
> insert overwrite table mytable
> select lower(col1), col2, col3 from
Hi Dani
When you say that your data is on two Hadoop clusters, it means a
MapReduce job would have to spawn its tasks across two or more clusters. AFAIK
this is currently out of the scope of the MapReduce framework, so the answer is no: you
can't materialize such a join with Hive.
Regards
Bejoy.K.S
Hi Edward,
Could you please clarify what you mean in your last paragraph? Did you find
Pig Latin a weak framework in terms of MapReduce?
Thanks again for the response.
Mahsa
On Sat, Mar 17, 2012 at 12:04 PM, Edward Capriolo wrote:
> In general Hive does not offer features it cannot do well. Cro
I understand Hive submits the translated MR job to "a" jobtracker. My end goal is
generic; to reiterate: "I'm trying to figure out if it's possible to join
tables from different Hadoop clusters" (without moving the data), either using
something existing or by writing my own wrapper.
On Sat, Mar 17, 2012 at 6:31 AM, wd
In general Hive does not offer features it cannot do well. Cross joins on
any data set where one table is not very small do not scale in MapReduce,
so there is not a big win in offering syntax for them.
Not talking about Pig, but one very common unnamed MapReduce framework
offers many features th
Hive does not 'join' your data itself; that's all done by Hadoop.
On Sat, Mar 17, 2012 at 7:27 AM, Dani Rayan wrote:
> Can Hive be configured to work with multiple namenodes (clusters)? I
> understand we can use the 'SET' command to set any Hadoop (or Hive)
> configuration variable. But is it possible to handl
System.out.println("fields: " + res.getString(1));
According to the type of each field you can access more columns, e.g.
res.getInt(2), res.getString(3), and so on up to your 30 columns; this will give
you all the columns you need. (Note that JDBC column indexes start at 1, not 0.)
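Rather than hard-coding one accessor per column, the same idea can be done generically by reading the column list from the result-set metadata. A runnable sketch with Python's DB-API against SQLite (the table and data are made up, standing in for the Hive result set):

```python
# Read all columns of a row without hard-coding 30 accessors:
# take the column names from the cursor metadata and zip them with
# the row. SQLite stands in for Hive; the table is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'alice', 'paris')")

cur = conn.execute("SELECT * FROM t")
columns = [d[0] for d in cur.description]   # column names, in order
row = cur.fetchone()
fields = dict(zip(columns, row))            # works for 3 or 30 columns
print(fields)
```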
On Fri, Mar 16, 2012 at 11:17 AM, Bhavesh Shah wrote:
>
> Hi,
> I am trying to implement a task in Hive like Stor
Try scan.setCaching(500); or set the range on the scanner accordingly and keep
moving the scanner forward range by range, e.g. one scanner per batch of row
keys, advancing the start key after each batch.
These are the ways I can think of; I hope for more replies from more experienced
and knowledgeable people.
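The "keep moving the scanner range by range" idea is plain pagination over the key space. A language-neutral sketch in Python (the batch size and row count are hypothetical, not from the original thread):

```python
# Pagination sketch: instead of scanning everything at once, cover the
# key space in fixed-size ranges and advance the start key after each
# batch, the way the HBase scanner ranges above would advance.
def scan_in_ranges(total_rows, batch=500):
    """Yield (start, stop) key ranges covering 1..total_rows."""
    start = 1
    while start <= total_rows:
        stop = min(start + batch - 1, total_rows)
        yield (start, stop)
        start = stop + 1

ranges = list(scan_in_ranges(1200, batch=500))
print(ranges)  # three batches covering all 1200 rows
```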
On Fri, Mar 16, 2012 at 2: