Is there no way to find out inside a single reducer how many records were
created by all the Mappers? I tried several ways but nothing works. For
example, I tried this:
reporter.getCounter(Task.Counter.REDUCE_INPUT_RECORDS).getValue();
It's not working for me. Should this have worked? Am I ju
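As far as I know, a running task only sees its own counters through the
Reporter; the totals aggregated across all mappers are only dependable once
the job has finished. A minimal sketch of reading the aggregate from the
completed job (old org.apache.hadoop.mapred API, to match the Task.Counter
usage above; the job setup is a placeholder):

import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.Task;

public class MapOutputCount {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(MapOutputCount.class);
        // ... set mapper, reducer, input/output paths here ...
        RunningJob job = JobClient.runJob(conf);   // blocks until the job finishes
        Counters counters = job.getCounters();
        // Total records emitted by all map tasks, aggregated by the JobTracker.
        long total = counters.getCounter(Task.Counter.MAP_OUTPUT_RECORDS);
        System.out.println("Records emitted by all mappers: " + total);
    }
}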
Hi Stack
Thanks for the response. The HBase version we are using is 0.90.4-cdh3u3.
I do close the tables and scanners carefully after doing the CRUD
operations, though this does not happen (the finally clause does not execute)
when I terminate/kill the batches or stop the servlet containers.
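For reference, a minimal sketch of the cleanup pattern being described (old
pre-1.0 HBase client API; the table name is a placeholder, and the static
shared Configuration matches what is mentioned later in the thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanWithCleanup {
    // One shared Configuration per JVM.
    private static final Configuration CONF = HBaseConfiguration.create();

    public static void scanAll() throws Exception {
        HTable table = new HTable(CONF, "my_table");   // placeholder table name
        ResultScanner scanner = null;
        try {
            scanner = table.getScanner(new Scan());
            for (Result r : scanner) {
                System.out.println(Bytes.toString(r.getRow()));
            }
        } finally {
            // Runs on normal completion and on exceptions, but not when the
            // JVM is killed, which matches the behaviour described above.
            if (scanner != null) scanner.close();
            table.close();
        }
    }
}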
I took a very similar approach and it worked fine for me. Just spot-check
the regions afterwards to make sure they look lexicographically sorted. I used
ImmutableBytesWritable as my key, and the default hadoop sorting for that
turned out to sort lexicographically as required. Our hbase rows varied in
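For context, a rough sketch of that kind of map function (an
HFileOutputFormat-style bulk-load job; the input parsing, column family, and
qualifier below are made up):

import java.io.IOException;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class RowKeyMapper
        extends Mapper<LongWritable, Text, ImmutableBytesWritable, KeyValue> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
            throws IOException, InterruptedException {
        // Placeholder parsing: first field is the row key, second the value.
        String[] f = line.toString().split("\t", 2);
        byte[] row = Bytes.toBytes(f[0]);
        // The default Hadoop sort on ImmutableBytesWritable orders keys
        // lexicographically, which is what the HFiles need.
        ImmutableBytesWritable key = new ImmutableBytesWritable(row);
        KeyValue kv = new KeyValue(row, Bytes.toBytes("cf"),
                Bytes.toBytes("col"), Bytes.toBytes(f[1]));
        ctx.write(key, kv);
    }
}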
Hi,
You need to create your table with pre-split regions.
$hbase org.apache.hadoop.hbase.util.RegionSplitter -c 10 -f region_name
your_table
This command will pre-create 10 regions in your table using MD5 strings as
region boundaries.
You can also customize the splitting algorithm. Please see
On Fri, May 11, 2012 at 10:45 PM, Narendra yadala
wrote:
> Hi Dave
>
> I reuse the HBase configuration object as much as possible. I make it static so
> that there is one config per JVM. But the problem is stopping and starting
> the batches or the tomcat container, which is where I keep getting the
>
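A programmatic alternative to RegionSplitter, in case it is useful: pre-split
the table at creation time through the admin API (pre-1.0 HBase API; the
table name, family, and split keys below are placeholders):

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

public class PreSplitTable {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        HTableDescriptor desc = new HTableDescriptor("your_table");
        desc.addFamily(new HColumnDescriptor("cf"));
        // Explicit split keys give 5 regions here; RegionSplitter computes
        // MD5-based boundaries for you instead.
        byte[][] splits = {
            Bytes.toBytes("2"), Bytes.toBytes("4"),
            Bytes.toBytes("6"), Bytes.toBytes("8")
        };
        admin.createTable(desc, splits);
    }
}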
Hi Shashwat,
I will try it, but can you send me your core-site.xml, mapred-site.xml, and
hdfs-site.xml as well?
Have you added any lines to zoo.cfg?
> From: dwivedishash...@gmail.com
> To: user@hbase.apache.org
> Subject: RE: Important "Undefined Error"
> Date: Sat, 12 May 2012 23:31:49 +0530
>
The problem is that your HBase is not able to connect to Hadoop. Can you put your
hbase-site.xml content here? Have you specified localhost somewhere? If so,
remove localhost from everywhere and put your HDFS namenode address:
suppose your namenode is running on master:9000, then put your hbase file
s
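To illustrate the kind of settings meant (a sketch only, shown
programmatically rather than as hbase-site.xml; "master:9000" is taken from
the example above, so substitute your real NameNode host:port):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseConnSettings {
    public static Configuration make() {
        Configuration conf = HBaseConfiguration.create();
        // Point HBase at the real NameNode instead of localhost.
        conf.set("hbase.rootdir", "hdfs://master:9000/hbase");
        conf.set("hbase.cluster.distributed", "true");
        // The ZooKeeper quorum must also use resolvable hostnames, not localhost.
        conf.set("hbase.zookeeper.quorum", "master");
        return conf;
    }
}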
Hi Shashwat,
I want to tell you about my configurations:
I am using 4 nodes:
One "Master": Namenode, SecondaryNamenode, JobTracker, Zookeeper, HMaster
Three "Slaves": Datanodes, TaskTrackers, RegionServers
On both the master and the slaves, all the Hadoop daemons are working well, but as for
the hbase mas
You can turn off Hadoop safe mode using: hadoop dfsadmin -safemode leave
On Sat, May 12, 2012 at 8:15 PM, shashwat shriparv <
dwivedishash...@gmail.com> wrote:
> First thing, copy core-site.xml and hdfs-site.xml from the hadoop conf directory to
> the hbase conf directory, turn off Hadoop safe mode, and then try
First thing, copy core-site.xml and hdfs-site.xml from the hadoop conf directory to the
hbase conf directory, turn off Hadoop safe mode, and then try...
On Sat, May 12, 2012 at 6:27 PM, Harsh J wrote:
> Dalia,
>
> Is your NameNode running fine? The issue is that HBase Master has been
> asked to talk to HDFS, but
Dalia,
Is your NameNode running fine? The issue is that HBase Master has been
asked to talk to HDFS, but it can't connect to the HDFS NameNode. Does
"hadoop dfs -touchz foobar" pass or fail with similar retry issues?
What is the value of fs.default.name in Hadoop's core-site.xml? And
what's the out
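A rough programmatic equivalent of that touchz check, in case it helps narrow
things down (the path below is arbitrary):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTouchzCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml from the classpath
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        FileSystem fs = FileSystem.get(conf);
        // Rough equivalent of 'hadoop dfs -touchz foobar': create an empty file.
        fs.create(new Path("/tmp/foobar")).close();
        System.out.println("Created /tmp/foobar via " + fs.getUri());
    }
}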
Dear Harsh
When I run $hbase master start
I found the following errors:
12/05/12 08:32:42 INFO ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=6
12/05/12 08:32:42 INFO security.UserGroupInformation: JAAS Configuration already set up for Hadoop, not re-installing.
12/
Hi Dalia,
On Sat, May 12, 2012 at 5:14 PM, Dalia Sobhy wrote:
>
> Dear all,
> First, I have a problem with HBase: I am trying to install it on a
> distributed/multinode cluster.
> I am using the Cloudera guide:
> https://ccp.cloudera.com/display/CDH4B2/HBase+Installation#HBaseInstallation-StartingtheHB
Dear all,
First, I have a problem with HBase: I am trying to install it on a
distributed/multinode cluster.
I am using the Cloudera guide:
https://ccp.cloudera.com/display/CDH4B2/HBase+Installation#HBaseInstallation-StartingtheHBaseMaster
But when I write the command from the "Creating the /hbase Directory in HDFS" step
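For reference, a sketch of what the "Creating the /hbase Directory in HDFS"
step amounts to, done through the Java API instead of the hadoop fs commands
in the guide (the hbase user/group and path are the usual defaults, so adjust
to your setup; this must run as the HDFS superuser):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateHBaseDir {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path hbaseRoot = new Path("/hbase");
        fs.mkdirs(hbaseRoot);
        // Hand ownership to the hbase user so the master can write there.
        fs.setOwner(hbaseRoot, "hbase", "hbase");
    }
}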
Could it be that you could use completebulkload and see if that
works? That must be faster than HBaseStorage. You could pre-split
using
export HADOOP_CLASSPATH=`hbase classpath`;hbase
org.apache.hadoop.hbase.util.RegionSplitter -c 10 '' -f
On Sat, Apr 28, 2012 at 8:46 PM, M. C. Srivas wr
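For completeness, a sketch of driving completebulkload from code via
LoadIncrementalHFiles, the class behind that tool (the HFile directory and
table name are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles;

public class BulkLoadHFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        // Moves the HFiles produced by HFileOutputFormat into the table's regions.
        loader.doBulkLoad(new Path("/user/me/hfiles"), new HTable(conf, "your_table"));
    }
}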
Wouldn't major_compact trigger a split... if it really needs to split?
However, if you want to pre-split regions for your table, you can use the
RegionSplitter utility as below:
$export HADOOP_CLASSPATH=`hbase classpath`; hbase
org.apache.hadoop.hbase.util.RegionSplitter
This will give you a usag