machine as the region server and
> what versions of hadoop and hbase you are running.
>
> ---
> Jim Kellerman, Powerset (Live Search, Microsoft Corporation)
>
Hello List,
We encountered an out-of-memory error in data loading. We have 5 data nodes
and 1 name node distributed on 6 machines. Block-level compression was used.
Following is the log output. The problem seems to be caused during compression. Has
anybody experienced such an error before? Any help is appreciated!
Xing
Hi All,
There may be something wrong with Hadoop, but the administration page
shows everything is OK.
It remains very slow even after I stop and restart DFS, and I get error
messages like:
Failed to rename output with the exception: java.net.SocketTimeoutException:
timed out waiting for
If I use one node for reduce, Hadoop can sort the result.
If I use 30 nodes for reduce, the result is part-00000 ~ part-00029.
How can I make all 30 parts sorted globally, so that every value in part-00001
is greater than every value in part-00000?
Thanks a lot
Xing
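Hadoop handles exactly this with a total-order (range) partitioner: sample the input to pick 29 split points, then route each key to the reducer whose key range contains it, so each reducer's sorted output covers a disjoint, ordered range and concatenating part-00000 through part-00029 yields a globally sorted result (this is the approach behind Hadoop's TotalOrderPartitioner, as used by TeraSort). A minimal sketch of the idea in Python; the split points 10 and 20 and the toy keys are illustrative, not from a real sampler:

```python
import bisect

def make_partitioner(split_points):
    """split_points: sorted list of n-1 keys for n reducers.

    Returns a function mapping a key to the index of the reducer
    whose range contains it (the idea behind TotalOrderPartitioner).
    """
    def partition(key):
        return bisect.bisect_right(split_points, key)
    return partition

# Toy example: 3 reducers with illustrative split points 10 and 20.
partition = make_partitioner([10, 20])
keys = [1, 5, 10, 6, 8, 9, 15, 25]
buckets = {0: [], 1: [], 2: []}
for k in keys:
    buckets[partition(k)].append(k)

# Each reducer sorts only its own bucket (as MapReduce already does);
# because the ranges are ordered, plain concatenation is globally sorted.
result = sorted(buckets[0]) + sorted(buckets[1]) + sorted(buckets[2])
```

The split points are normally derived by sampling the input (Hadoop ships an InputSampler for this), so the 30 reducers each receive a roughly equal share of the key space.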
Hi All,
There are 30 output files from Hadoop. Each file is in ascending
order, but the order is not ascending across files; for example, the values are
1, 5, 10 in file A and 6, 8, 9 in file B.
My question is how to enforce the order across all the files as well,
so the output values are 1, 5, 6, 8, 9, 10.