Hi all,
I have used a compact index for my table, and the response time is now the
same for queries with and without the index. Previously, it was showing an
improvement. I just changed some parameters to increase the heap size, and
since then it has been behaving weirdly. So, how can I make sure that my
query is using the index? I am still getting the same
exception..
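One way to check whether Hive actually uses the index is to compare query plans. A minimal sketch, assuming a filter on the indexed column (table, column, and value names here are placeholders, and `hive.optimize.index.filter` may only be available in newer Hive releases):

```sql
-- Show the plan; if the compact index is picked up, the plan references
-- the index table (e.g. default__mytable_myidx__) instead of a full scan.
EXPLAIN SELECT * FROM mytable WHERE indexed_col = 'some_value';

-- In newer Hive releases, automatic index use can be toggled with:
SET hive.optimize.index.filter=true;
```

Running the EXPLAIN with the setting toggled on and off should produce different plans if the index is being taken into account.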
On 28 July 2011 16:37, Siddharth Ramanan wrote:
> Hi,
> I am adding the log information for a reduce task. I am running hadoop
> in standalone mode.
>
> 2011-07-28 19:16:42,621 ERROR
> org.apache.hadoop.hive.ql.stats.jdbc.JDBCStatsPublisher:
2011-07-28 19:16:42,625 INFO org.apache.hadoop.mapred.TaskRunner:
Task:attempt_201107271749_0029_r_19_0 is done. And is in the process of
commiting
2011-07-28 19:16:42,627 INFO org.apache.hadoop.mapred.TaskRunner: Task
'attempt_201107271749_0029_r_19_0' done.
On 28 July 2011 16:19, Siddharth Ramanan wrote:
Hi,
I have a table which has close to a billion rows. I am trying to
create an index for the table, but when I run the alter command, I always end up
with map-reduce jobs that fail with errors. The same runs fine for small tables,
though. I also notice that the number of reducers is set to 24, even if set
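For reference, a sketch of creating a compact index with a deferred rebuild and overriding the reducer count before the rebuild job; the index and table names are placeholders, and whether `mapred.reduce.tasks` is honoured by the rebuild job depends on the Hive/Hadoop version:

```sql
-- Create the index without building it immediately.
CREATE INDEX big_tbl_idx ON TABLE big_tbl (key_col)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;

-- Suggest a reducer count for the rebuild job.
SET mapred.reduce.tasks=64;

-- Build (or rebuild) the index; this launches the map-reduce jobs.
ALTER INDEX big_tbl_idx ON big_tbl REBUILD;
```

Separating index creation from the rebuild step makes it easier to retry the expensive rebuild with different job settings when it fails on a large table.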