Hi,
I found one little mistake in the hbase-ec2-init-remote.sh script:
# Update classpath to include HBase jars and config
cat >> $HADOOP_HOME/conf/hadoop-env.sh <
eed more than 2.5 hours.
I used the same input data, about 580MB, for my test. I was wondering why
the difference in performance is so huge.
Does anybody have any ideas?
Thanks a lot!
stchu
... :P
stchu
2009/9/23 stack
> Yes, what Erik said. MapFile is a binary format. What you are seeing is
> some preamble up front listing the key and value class types plus some
> miscellaneous metadata. Then, per key and value, these are serialized
> Writable types.
>
> Move to hba
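
For reference, the preamble and the serialized Writables that stack describes can be inspected with the stock Hadoop MapFile.Reader. A rough sketch; the file path below is only a placeholder:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;

public class MapFileDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Placeholder path; point it at a real MapFile directory (it holds "data" and "index" files).
    MapFile.Reader reader = new MapFile.Reader(fs, "/path/to/some/mapfile", conf);
    try {
      // The "preamble": the key and value class names recorded in the file header.
      System.out.println("key class:   " + reader.getKeyClass().getName());
      System.out.println("value class: " + reader.getValueClass().getName());
      // Each entry is a serialized Writable key/value pair.
      WritableComparable key = (WritableComparable) reader.getKeyClass().newInstance();
      Writable value = (Writable) reader.getValueClass().newInstance();
      while (reader.next(key, value)) {
        System.out.println(key + " -> " + value);
      }
    } finally {
      reader.close();
    }
  }
}
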
s that "offset" and/or
"timestamps"?
Besides, since HBase stores the MapFiles per column family, why do we need
to save that (in this case: Level0 and Level1)?
I appreciate your help or guidance.
stchu
data from the first cluster to the second one
directly?
If I can't do so, is there any simple way to do this?
Any suggestion or guidance would be appreciated!
stchu
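
If the question is about copying the underlying HDFS files between the two clusters, one option is the plain FileSystem API. This is only a rough sketch: the namenode addresses and paths are placeholders, and whether it is safe to copy live HBase table directories this way is a separate question.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyBetweenClusters {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Placeholder namenode addresses for the source and destination clusters.
    FileSystem srcFs = FileSystem.get(URI.create("hdfs://namenode1:9000"), conf);
    FileSystem dstFs = FileSystem.get(URI.create("hdfs://namenode2:9000"), conf);
    // Recursive copy of a directory; the last "false" means "do not delete the source".
    FileUtil.copy(srcFs, new Path("/data/export"), dstFs, new Path("/data/export"), false, conf);
  }
}

The hadoop distcp tool does the same job from the command line and is usually the better fit for large amounts of data.
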
is this? How much data are you writing? What's
> your cluster like (#nodes, hardware, etc)?
>
> It would also be useful to see your master log around where it began
> to fail, you should see exceptions showing up.
>
> Thx,
>
> J-D
>
> On Sat, Aug 8, 2009 at 11:25 P
Hi,
I tried to write a large amount of data into HBase. The map tasks
complete, but the job fails while processing the reduce tasks.
The error message is shown as follows:
...
09/08/07 19:45:47 INFO mapred
Hi, Erik,
Thanks for your suggestion. I tried to import some subsets of my data.
The subsets are 350MB, 995MB, 2.4GB and 4GB. The first two complete
without any exception, but the latter two fail with the same exceptions.
I will try to do these jobs on a larger cluster, thanks a lot!
stchu
h about 3.3 billion rows.
The size of the reduce input is about 3 times that of the map input. I used
Hadoop 0.19.1 and HBase 0.19.3. I tried 12 and 53 for numReduceTasks, but both
failed. Could anyone give me some help? Thanks a lot.
stchu
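
For what it's worth, on the old org.apache.hadoop.mapred API the number of reduce tasks is set on the JobConf. A rough driver sketch for this kind of text-to-HBase import; the table name, input path, and the MyMapper/MyTableReducer classes are made-up placeholders (the two classes are sketched after the next mail below):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.mapred.TableMapReduceUtil;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;

public class ImportDriver {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf(ImportDriver.class);
    job.setJobName("hdfs-text-to-hbase import");

    // Plain text input from HDFS, as described in the mails.
    job.setInputFormat(TextInputFormat.class);
    FileInputFormat.setInputPaths(job, new Path(args[0]));

    // Placeholder mapper that emits (Text, Text) pairs.
    job.setMapperClass(MyMapper.class);
    job.setMapOutputKeyClass(Text.class);
    job.setMapOutputValueClass(Text.class);

    // Wire the placeholder reducer to the target table ("mytable" is made up).
    TableMapReduceUtil.initTableReduceJob("mytable", MyTableReducer.class, job);

    // The knob discussed above: the number of reduce tasks.
    job.setNumReduceTasks(12);

    JobClient.runJob(job);
  }
}
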
Hi,
I am trying to import a large HDFS file (about 4GB) into HBase.
The map class works with the TextInputFormat provided by Hadoop, and
the reduce class is implemented with TableReduce.
The map process completes without any problem, but a region server crashes
during the reduce stage.
The log shows:
ent: Retrying conne
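
Roughly, the map and reduce classes described here could look like the following on the 0.19-era org.apache.hadoop.hbase.mapred API. This is only a sketch: the tab-separated line layout and the "content:value" column are assumptions, not taken from the mails, and it pairs with the driver sketched a few mails above.

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.hbase.io.BatchUpdate;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapred.TableReduce;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

/** Splits each text line into a row key and a value (assumed tab-separated layout). */
public class MyMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {
  public void map(LongWritable offset, Text line,
      OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
    String[] parts = line.toString().split("\t", 2);
    if (parts.length == 2) {
      output.collect(new Text(parts[0]), new Text(parts[1]));
    }
  }
}

/** Turns each (rowKey, values) group into a BatchUpdate against the target table. */
class MyTableReducer extends MapReduceBase implements TableReduce<Text, Text> {
  public void reduce(Text rowKey, Iterator<Text> values,
      OutputCollector<ImmutableBytesWritable, BatchUpdate> output, Reporter reporter)
      throws IOException {
    BatchUpdate update = new BatchUpdate(rowKey.toString());
    while (values.hasNext()) {
      // "content:value" is a placeholder family:qualifier column.
      update.put("content:value", values.next().toString().getBytes());
    }
    output.collect(new ImmutableBytesWritable(rowKey.toString().getBytes()), update);
  }
}
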